Your CIO made a strategic decision to move to AWS and you are wondering how to move your Enterprise Data Warehouse (EDW) on Teradata: should you opt for using Teradata Software Tiers ...
Check out the technologies we support to help your team optimize moving data from your data center to AWS, between AWS services, and even between AWS and other cloud platforms.
We've migrated more databases to Amazon RDS, Amazon Aurora, and Amazon Redshift using AWS Database Migration Service (DMS) and the AWS Schema Conversion Tool (SCT) than any other AWS partner in the world.
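As a rough sketch of what a DMS migration setup involves, the helper below builds the table-mapping JSON that tells a replication task which schemas and tables to include, and a second function shows how that JSON feeds into `create_replication_task`. The schema name, task identifier, and ARNs are illustrative placeholders, not values from an actual project.

```python
import json


def build_table_mappings(schema, tables=("%",)):
    """Build the DMS table-mapping JSON that selects the tables to replicate.

    "%" is the DMS wildcard meaning "every table in the schema".
    """
    rules = [
        {
            "rule-type": "selection",
            "rule-id": str(i + 1),
            "rule-name": f"include-{i + 1}",
            "object-locator": {"schema-name": schema, "table-name": table},
            "rule-action": "include",
        }
        for i, table in enumerate(tables)
    ]
    return json.dumps({"rules": rules})


def create_replication_task(task_id, source_arn, target_arn, instance_arn):
    """Create a full-load-plus-CDC task (requires AWS credentials to run)."""
    import boto3  # assumed available in the migration environment

    dms = boto3.client("dms")
    dms.create_replication_task(
        ReplicationTaskIdentifier=task_id,
        SourceEndpointArn=source_arn,
        TargetEndpointArn=target_arn,
        ReplicationInstanceArn=instance_arn,
        MigrationType="full-load-and-cdc",
        TableMappings=build_table_mappings("hr"),
    )
```

In practice the selection rules are usually paired with transformation rules (renaming schemas, dropping columns) generated from an SCT assessment, but the selection block above is the minimum a task needs.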
See how we can orchestrate the transformation of your data, leveraging the simple, flexible, and cost-effective AWS Glue ETL service.
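A minimal sketch of registering and starting a serverless Glue ETL job through the API; the job name, script location, and role ARN are illustrative assumptions, and the Glue version and worker sizing are just reasonable defaults, not a recommendation.

```python
def build_glue_job_args(name, script_location, role_arn, workers=2):
    """Assemble the arguments for glue.create_job (values are illustrative)."""
    return {
        "Name": name,
        "Role": role_arn,
        "Command": {
            "Name": "glueetl",  # a Spark-based Glue ETL job
            "ScriptLocation": script_location,
            "PythonVersion": "3",
        },
        "GlueVersion": "4.0",
        "WorkerType": "G.1X",
        "NumberOfWorkers": workers,
    }


def create_and_start_job(name, script_location, role_arn):
    """Register the job and kick off a run (needs AWS credentials)."""
    import boto3  # assumed available

    glue = boto3.client("glue")
    glue.create_job(**build_glue_job_args(name, script_location, role_arn))
    return glue.start_job_run(JobName=name)["JobRunId"]
```

Because Glue is serverless, there is no cluster to provision or tear down; cost follows the `WorkerType` and `NumberOfWorkers` settings for the duration of each run.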
With the AWS Data Pipeline web service, you can process data locked up in on-premises storage and then efficiently transfer the results to other AWS services.
Learn how DB Best can help you analyze data in Amazon S3 by getting the most out of the interactive, serverless Amazon Athena query service.
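Athena queries run asynchronously against data in S3, so the usual pattern is start, poll, fetch. The sketch below follows that pattern; the database name and results bucket would come from your own environment, and the `flatten_athena_rows` helper is just a convenience for unpacking Athena's nested response shape.

```python
def flatten_athena_rows(result_set):
    """Turn Athena's nested ResultSet dict into plain Python lists."""
    return [
        [col.get("VarCharValue") for col in row["Data"]]
        for row in result_set["Rows"]
    ]


def run_athena_query(sql, database, output_s3):
    """Start a query, wait for completion, return rows (needs AWS credentials)."""
    import time

    import boto3  # assumed available

    athena = boto3.client("athena")
    query_id = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]

    while True:  # Athena is asynchronous, so poll for completion
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    results = athena.get_query_results(QueryExecutionId=query_id)
    return flatten_athena_rows(results["ResultSet"])
```

Note that the first returned row is typically the column headers, and `VarCharValue` is absent for NULLs, which is why the helper uses `.get()`.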
Learn how DB Best can help you collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information with Amazon Kinesis.
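A producer-side sketch of streaming events into Kinesis, assuming a hypothetical stream of device telemetry: each event is serialized to bytes and given a partition key, which is what Kinesis uses to route records to shards.

```python
import json


def make_kinesis_record(event, key_field="device_id"):
    """Serialize one event into the Data/PartitionKey shape put_records expects."""
    return {
        "Data": json.dumps(event).encode("utf-8"),
        "PartitionKey": str(event[key_field]),  # routes the record to a shard
    }


def send_events(events, stream_name):
    """Batch-write events to a Kinesis data stream (needs AWS credentials)."""
    import boto3  # assumed available

    kinesis = boto3.client("kinesis")
    # put_records accepts up to 500 records per call
    return kinesis.put_records(
        StreamName=stream_name,
        Records=[make_kinesis_record(e) for e in events],
    )
```

Choosing a partition key with enough distinct values (here the device ID) matters, because all records sharing a key land on the same shard and share its throughput limit.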
DB Best can guide your business to run and scale Apache Hadoop, Spark, HBase, Presto, Hive, and other Big Data Frameworks in the Amazon cloud leveraging Amazon EMR service.
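One common EMR pattern is a transient cluster: spin it up, run a set of Spark steps, and let it terminate when the steps finish. The sketch below assumes the default EMR service roles exist in the account; the cluster name, release label, instance types, and script path are illustrative.

```python
def spark_submit_step(name, args):
    """Describe one spark-submit step for an EMR cluster."""
    return {
        "Name": name,
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",  # EMR's generic command launcher
            "Args": ["spark-submit"] + list(args),
        },
    }


def launch_transient_cluster(steps, log_uri):
    """Start an EMR cluster that runs the steps, then terminates itself."""
    import boto3  # assumed available

    emr = boto3.client("emr")
    return emr.run_job_flow(
        Name="demo-spark-cluster",
        ReleaseLabel="emr-6.15.0",
        LogUri=log_uri,
        Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
        Instances={
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge",
                 "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge",
                 "InstanceCount": 2},
            ],
            # False => the cluster shuts down after the last step
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        Steps=steps,
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )["JobFlowId"]
```

For long-lived interactive clusters you would instead set `KeepJobFlowAliveWhenNoSteps` to `True` and submit steps on demand.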
Additional tools we've used with AWS
Data integration platforms now support moving data between your data center and AWS. Our team has worked with the following AWS data integration solutions and has a deep understanding of how to build hybrid solutions that optimize performance.
Discover how we can help you use Attunity solutions to integrate your data from existing on-premises systems to the cloud, big data, and IoT systems within a short period of time. DB Best can flawlessly replicate your data between different types of databases and handle any data changes.
Using Informatica solutions we can integrate your data from existing on-premises systems to the cloud, big data, and IoT systems smoothly and cost-effectively. Our solution architects will migrate the data accelerating time to value for your business goals.
We offer a fast and cost-effective way to connect, clean, and share cloud and on-premises data, using an architecture that easily scales to meet growing business demands.
Learn how we can integrate your data from existing on-premises systems to the cloud, big data, and IoT systems leveraging IBM DataStage solutions. With our experience, you can migrate your data quickly.
Check out our Oracle GoldenGate technology solutions to facilitate real-time data integration, replication, transformations, and verification in heterogeneous IT environments.
Put our Microsoft SQL Server Integration Services experience to use for seamless data extraction, transformation, and loading operations of any complexity.
Perfectly orchestrate the transformation of your data and provide a wide range of analytical services with DB Best, powered by Azure Data Factory.
Automate your data pipelines with our services based on Cloud Dataflow to minimize latency and maximize utilization.
Open source data integration solutions for AWS
For organizations looking to use open-source data integration solutions, our team supports the following Apache solutions for performing data integration tasks and storage. We can also migrate solutions built on these Apache technologies to AWS data integration tools like AWS Glue to take advantage of serverless computing, performance, security, and integration with other AWS solutions.
- Apache Hadoop is a distributed computing platform, which includes the Hadoop Distributed Filesystem (HDFS) and an implementation of MapReduce. Implemented on AWS as Amazon EMR (Elastic MapReduce).
- Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google’s Bigtable. On AWS, HBase runs directly on Amazon EMR, and Amazon DynamoDB supplies similar capabilities.
- Apache Hive is data warehouse software that facilitates querying and managing large datasets residing in distributed storage, with tools to enable easy data extract/transform/load (ETL) to HDFS and other data stores like HBase. Implemented as Amazon Athena.
- Apache CouchDB is a database which completely embraces the web by storing your data with JSON documents. Implemented on AWS as DynamoDB.
- Apache Spark is a fast and general engine for large-scale data processing. It offers high-level APIs in Java, Scala and Python as well as a rich set of libraries including stream processing, machine learning, and graph analytics.
- The Apache Cassandra database provides high availability and linear scalability with fault tolerance on commodity hardware or cloud infrastructure.
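The MapReduce model that Hadoop popularized and that Spark generalizes can be sketched in plain Python: a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase folds each group into a result. This toy word count runs locally with no cluster and is only meant to make the three phases concrete.

```python
from collections import defaultdict


def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.lower().split():
            yield word, 1


def shuffle(pairs):
    """Shuffle: group all values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups


def reduce_phase(groups):
    """Reduce: fold each key's list of values into a single count."""
    return {key: sum(values) for key, values in groups.items()}


counts = reduce_phase(shuffle(map_phase(["to be or not to be"])))
# counts == {"to": 2, "be": 2, "or": 1, "not": 1}
```

On a real cluster each phase runs in parallel across many machines and the shuffle moves data over the network, which is exactly the work these frameworks exist to manage.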
Complex Event Processing
- Apache Storm is a distributed real-time computation system. Similar to how Hadoop provides a set of general primitives for doing batch processing, Storm provides a set of general primitives for doing real-time computation. Implemented as Amazon Kinesis.
- Apache Beam is a unified programming model for both batch and streaming data processing, enabling efficient execution across diverse distributed execution engines and providing extensibility points for connecting to different technologies and user communities. Implemented as AWS Glue.
General Data Processing
- Apache Sqoop is a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases. AWS Glue provides this capability.
- Apache Flume is a distributed, reliable, and available system for efficiently collecting, aggregating and moving large amounts of log data from many different sources to a centralized data store.
- Apache Kafka is a distributed, fault-tolerant, publish-subscribe messaging system that can handle hundreds of megabytes of reads and writes per second from thousands of clients.
Using AWS Snowball Edge to transfer multiple terabytes of data into the Amazon cloud
One of the main concerns during large-scale database migrations to the cloud is how long the data transfer may last. When you need to move multiple terabytes of data, the migration process may last for weeks or even months. In addition, the bandwidth of your network connection becomes a limiting factor, with some security concerns possibly appearing.
So, the whole migration project becomes unsustainable, causing many customers with heavyweight databases to abandon their cloud migration initiatives. Amazon addressed this with a physical appliance called AWS Snowball Edge, which allows for fast and secure transfer of up to 80 TB of data in a matter of days.
We had a great opportunity to test the latest AWS Snowball Edge device in our data center. Being half the size of the original AWS Snowball, the latest version of the appliance can store up to 83 TB of data. This allows for speeding up large-scale data transfers, even taking into account the device shipping time.
Managing data and applications anywhere, we often face issues related to migration of huge amounts of data for our customers. So, as an Amazon partner, we received the brand new AWS Snowball Edge for testing purposes and tried to migrate our Oracle Database to Amazon Aurora PostgreSQL. Watch the following video to learn more about our experience with AWS Snowball Edge.
Check out the following blog posts to learn about some of the solutions we’ve built using the AWS data integration technologies.
One of the main concerns during large-scale database migrations to the cloud is how long the data transfer will last. When you need to move multiple terabytes of data, the migration pr...
This post continues our video blog series on the AWS Schema Conversion Tool (SCT). In our previous blog posts, we talked about using AWS SCT for transactional database migration projec...