DB Best Chronicles
Talks on Big Data, Mobile Apps, Web and Software Development

Database Compare Suite update 4.10 brings Sybase IQ support

Posted on July 26th, 2017 | In Database Migration, Database Upgrades

We are happy to announce the latest release of DB Best’s in-house tool, Database Compare Suite. The new version, 4.10, adds support for yet another data warehouse platform: Sybase IQ. Watch Database Compare Suite in action in the following video, and read on to learn about the other improvements in this update.

13 Major Database Platforms and counting

Support for Sybase IQ brings the total number of database platforms available in Database Compare Suite to 13, six of which are data warehouse platforms. In the next release, we plan to add support for yet another data warehouse platform.

The full list of supported database dialects includes Amazon Redshift, Azure SQL Data Warehouse, Azure SQL Database, Greenplum, IBM DB2, IBM Netezza, Microsoft SQL Server, MySQL, Oracle, PostgreSQL, Sybase ASE, Sybase IQ, and Teradata. That means you can now use 91 database pairs for schema and data migration, synchronization, and comparison.

Technical improvements

While paying extra attention to data warehouses, the Database Compare Suite team also improved specific functions related to big data operations. One of the most important data warehouse operations is Fast Data Comparison, which performs an exceptionally fast check of whether the data in two selected tables is identical. We’ve enhanced this operation with an option for fast processing of floating-point numbers.

DBCS 4.10 floating point comparison

The problem is that each database platform stores data in its own specific format, and this matters even more for floating-point numbers: the number of digits stored after the decimal point and the rounding settings may differ. So, when we compare floating-point numbers in two tables stored on different database platforms, we need to be sure that these storage specifics do not affect the comparison results. The latest release of Database Compare Suite can interpret floating-point numbers as integers or convert them to exponential format, which evens out the storage differences between platforms. Be sure to set the floating-point comparison option when performing a heterogeneous data comparison operation.
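Conceptually, the exponential-format option works like casting both sides to a common textual representation before comparing. A minimal T-SQL sketch of the idea follows; the table and column names are hypothetical, and Database Compare Suite performs the equivalent normalization internally:

```sql
-- Hypothetical sketch: normalize floating-point values before comparing them.
-- CONVERT style 2 renders a float in scientific notation with 16 digits,
-- which evens out platform-specific rounding and storage differences.
SELECT s.id
FROM   dbo.source_table AS s
JOIN   dbo.target_table AS t ON t.id = s.id
WHERE  CONVERT(varchar(30), CAST(s.amount AS float), 2)
    <> CONVERT(varchar(30), CAST(t.amount AS float), 2);
```

Rows returned by this query are the ones whose values genuinely differ, rather than ones that merely round differently on the two platforms.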

DBCS 4.10 lock target table

With the huge amounts of data in data warehouses, data migration operations may take a long time, and it is essential that no one changes the data in the target table while a migration is in progress. With that in mind, we’ve added an option that lets users lock the target table during a data migration operation. You can enable it on the operation’s settings page.
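Database Compare Suite applies the lock for you, but on SQL Server the effect is similar to an exclusive table-lock hint on the load statement. A rough sketch, with made-up table names:

```sql
-- Hypothetical sketch: hold an exclusive lock on the target table for the
-- duration of the data load, so no concurrent writes can slip in.
BEGIN TRANSACTION;

INSERT INTO dbo.TargetTable WITH (TABLOCKX)
SELECT *
FROM   dbo.SourceStaging;

COMMIT TRANSACTION;
```

The lock is released when the transaction commits, so readers and writers queue up behind the load rather than interleaving with it.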

Download our free trial with no strings attached on the Database Compare Suite page.

Contact us to jumpstart your database migration project.

Finding My Pages

Posted on July 18th, 2017 | In indexes, Kalen's Continuum, SQL Server

In a post from two weeks ago, I told you that this week I would show you how to look inside pages to see the NEXTPAGE and PREVPAGE pointers. When I started to write this, I realized that before I can show you how to look inside the pages, I need to show you how to find out what pages to look at. So I’m going to juggle this blog schedule a bit, and this week I’ll show you how to find out what pages belong to a table and next time we’ll actually look inside the pages. However, the methods I’m going to show you to find page numbers will also provide the NEXTPAGE and the PREVPAGE values, so you don’t actually have to look inside the pages to get that information.

Starting in SQL Server 2012, Microsoft included the dynamic management object sys.dm_db_database_page_allocations. This function is technically undocumented, but other people have blogged about it elsewhere, and it is a very useful function to be aware of. The function has five parameters: database_id, object_id, index_id, partition_number, and mode. For most of the parameters, a NULL means ALL, so a NULL for object_id means you want all objects. Once an ALL is specified, the remaining parameters, except mode, are ignored. So if you specify NULL for object_id, the values for index_id and partition_number are ignored.

I’m not going to go into all the details, but I’ll show a few examples of the use of this function.
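As a taste of what such an example might look like, here is one possible call that lists the data pages of a single table, along with the next/previous page pointers from each page header (the table name dbo.Person is just an illustration; note that the page-pointer columns are only populated in DETAILED mode):

```sql
-- List the data pages of dbo.Person in the current database, together
-- with the next-page and previous-page pointers from the page headers.
SELECT allocated_page_page_id,
       next_page_page_id,
       previous_page_page_id,
       page_type_desc
FROM   sys.dm_db_database_page_allocations(
           DB_ID(),                    -- database_id: current database
           OBJECT_ID('dbo.Person'),    -- object_id: just this table
           1,                          -- index_id: the clustered index
           NULL,                       -- partition_number: all partitions
           'DETAILED')                 -- mode: LIMITED or DETAILED
WHERE  page_type_desc = 'DATA_PAGE'
ORDER  BY allocated_page_page_id;
```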

Read the rest of this entry »

Increasing database performance by migrating from Ingres to SQL Server

Posted on July 18th, 2017 | In Customer Story, Database Migration, SQL Server

One of our customers needed to migrate their Ingres database system to Microsoft SQL Server to improve the performance of their system. We came up with a migration solution and updated the business logic module of their application. As a result, the customer achieved a significant performance increase, improved the security of their database system, and got an optimized business logic architecture.

The video below depicts the migration process in detail:

Read the rest of this entry »

Architecting highly available and scalable system when migrating to SQL Server on AWS

Posted on July 12th, 2017 | In AWS, Customer Story, Database Migration, SQL Server

Our customer needed to reduce the licensing costs of their Oracle database. We came up with a comprehensive solution, migrating their database to the SQL Server platform and remediating all affected application components. As a result, our customer benefited from a higher-value SQL Server platform with enhanced scalability and high availability.

Read the rest of this entry »

What Makes an Index Clustered?

Posted on July 10th, 2017 | In indexes, Kalen's Continuum, SQL Server

In my previous post, I told you what it means for an index in SQL Server to be ‘clustered’. Usually, when we talk about SQL Server indexes, we are talking about indexes organized as B-trees (or technically, B+-trees, but we won’t go into the difference here). For these indexes, there are two basic properties that set clustered indexes apart from nonclustered ones.

First, the index leaf level IS the data. It is not a copy of the data or pointers to the data. When you create a clustered index, your data is reorganized and the rows and pages become the leaf level of the index.

Second, the data is sorted. As I discussed last time, this doesn’t mean it is necessarily physically sorted in consecutive slots on the pages, in contiguous pages in the file. But by following the page pointers from the first page to the last page, SQL Server can retrieve the data in order without having to perform a sort operation.

Here’s a graphic from the SQL Server documentation showing you the general structure of clustered B-tree indexes.

But not all clustered indexes have these properties. Let me tell you a bit about columnstore indexes. I won’t be able to tell you EVERYTHING about columnstore indexes, but I point you to another awesome resource for that. I’ll just tell you the basics.

SQL Server introduced a completely different way of organizing your data in SQL Server 2012 with nonclustered columnstore indexes. The name comes from the fact that the data is organized by column, not by row, as our ‘regular’ (B-tree) indexes are. Now that we have columnstore indexes, our original B-tree indexes are sometimes referred to as rowstore indexes.
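As a quick illustration of the two flavors, here is how each kind of index might be created; the table, index, and column names are made up for the example:

```sql
-- A 'rowstore' B-tree clustered index: the table's own rows and pages
-- are rebuilt to become the leaf level of the index.
CREATE CLUSTERED INDEX CI_Person_LastName
    ON dbo.Person (LastName);

-- A nonclustered columnstore index (available since SQL Server 2012):
-- a separate, column-organized copy of the selected columns.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Person
    ON dbo.Person (LastName, FirstName);
```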

Read the rest of this entry »

Enhancing SQL Server database performance with 3x faster query execution

Posted on July 6th, 2017 | In Azure, Customer Story, indexes, Query Tuning, SQL Server

A leading provider of business management solutions currently hosts many of their workloads using SQL Server databases on Azure. They faced a number of database performance issues and needed an efficient solution to resolve them. Having analyzed the customer’s system, we delivered instructions with detailed descriptions of all discovered issues and approaches to fix them. With our instructions, our customer achieved more than three times better overall query performance.

Read the rest of this entry »

Is Your Index Clustered Enough?

Posted on June 30th, 2017 | In indexes, Kalen's Continuum, SQL Server

When you tell SQL Server to create a regular clustered index on a table, it will completely rebuild your table. SQL Server will take your data rows and rearrange them on new pages. Your table’s pages become the leaf level of the clustered index, so sometimes we call the table a clustered table.

People usually think of clustered as meaning ‘sorted’, and for the most part I’ll go along with this. But sorting can be logical or physical, and there has been much confusion and misinformation, even from Microsoft’s own documentation.

For example, this document says:

[If you want to return your data in sorted order] it can be a good idea to cluster, that is physically sort, the table on that column to save the cost of a sort operation every time the column is queried.

It also says:

After the row with the first value is found by using the clustered index, rows with subsequent indexed values are guaranteed to be physically adjacent.

Well, this certainly seems to imply that the sorting is physical sorting. Right after index creation, it may well be that the data is physically sorted. But what happens when you start adding rows so that new pages need to split, or deleting rows so that pages are removed? SQL Server does not reorganize the whole table just because you need to insert one new row that won’t fit.

Your data is guaranteed to be logically sorted, not physically sorted. This means SQL Server maintains pointers or other means of specifying data location that allow it to retrieve the data in order efficiently, but the data is not necessarily stored on contiguous pages or in contiguous bytes on a page.

SQL Server keeps track of the order of pages by NEXTPAGE and PREVPAGE pointers within a page header. For example, if you have a clustered index on the LastName column in the Person table, there might be a page containing the LastName values from Price to Rodriguez and in the header the NEXTPAGE value might be a page number for a page that contains Rodriguez to Sai. The NEXTPAGE value might be one more than the page number for the page containing Price to Rodriguez, but it might not be. There is no guarantee. In the table I just looked at, Price to Rodriguez were on page 12027 and Rodriguez to Sai were on 12028. But, the next page after that was 12064, which contains LastName values Salah to Smith. So for some pages the logical order is also physical and for some it is not.

Figure 1 shows a section of a clustered table that is both physically and logically sorted. Each page has a pointer to the next page and a pointer to the previous page in sorted order. The page numbers are consecutive as we follow the pointers and the data values increase.

Figure 1: Clustered index pages that are both logically and physically in order

Read the rest of this entry »

Leveraging Amazon Web Services to build a set of video streaming applications

Posted on June 30th, 2017 | In Customer Story, Mobile Development, Web & Software Development

One of our regular customers wanted to add video streaming functions to their existing web and mobile applications. As the developers of their original applications, we proposed to re-architect the application and move it to the Amazon cloud. The customer agreed, and we delivered a cost-effective solution for in-house video streaming, based on Amazon Web Services.

Watch the following video to learn more about our technical approach to the upgrade and development of this application.

Read the rest of this entry »

Using data extraction filters for smart database migration with AWS Schema Conversion Tool

Posted on June 29th, 2017 | In AWS, Data Management, Database Migration

Amazon introduced data extraction agents in a previous release of the AWS Schema Conversion Tool (SCT). These agents let you migrate all the data from your on-premises data warehouse platforms to Amazon Redshift. But what if you don’t need to move all those terabytes of data and just want the data generated in the past couple of years? Starting with version 1.0.603, SCT lets you filter the data before uploading it to Redshift. In addition, the latest SCT version adds support for the older Teradata 13 and Oracle Data Warehouse 10 schemas.

In the following video, we will demonstrate how you can use the data extraction filters in the AWS Schema Conversion Tool.


Check out our new AWS Schema Conversion Tool Jumpstart offer to get up and running fast with your migration to Amazon Redshift.

Continue reading to learn more about using the data extraction filters.

Read the rest of this entry »

Data warehouse integration to embrace data analytics

Posted on June 27th, 2017 | In Business Intelligence, Customer Story, Data Management

Our customer has been running an Oracle-based online transaction processing (OLTP) system. They decided to take advantage of the modern data analysis capabilities of SQL Server’s data warehouse features, including columnstore indexes, for greater value than an Oracle-based data warehouse solution. We came up with a comprehensive solution for extracting and transforming their source data and loading it into the target database. As a result, our customer obtained a highly standardized and easily manageable solution within a short period of time.

Read the rest of this entry »