Why You Need a Cloud Data Warehouse

Are you new to the concepts of data warehousing? Do you want to know how your enterprise can benefit from using a data warehouse to streamline your business, better serve your customers and create new market opportunities? If so, this blog is for you. Let’s cover the basics.

It begins with production data

Day-to-day, nearly all enterprise-class companies process data as part of core business operations. Banks process debit and credit transactions for account holders. Brick-and-mortar and online retailers process in-store and website purchases, respectively. Insurance companies maintain and update customer profile and insurance policy information for policyholders.

These production systems are transactional in nature and require databases that can capture, write, update or delete information at the pace of business operations. The systems behind these transactions are online transaction processing (OLTP) databases. For example, OLTP databases for investment firms must operate at lightning speed to keep up with high-volume stock and bond trading activity that occurs in fractions of a second.

The need for a data warehouse solution

In addition to capturing transactions, another aspect of business operations is understanding what’s happening, or what has happened, based on the information captured in OLTP databases. By this, I mean companies must know not only how much revenue is coming in, but also where that revenue is coming from, the profile of the customers making the purchases, business trends (up or down), the products and services being purchased and when those transactions take place. And certainly, businesses need to know what it will take for customers to remain loyal and buy more. Answers to these questions are necessary to develop strategic business plans and create new products that keep businesses growing.

Why transactional (OLTP) systems are not optimized for data warehousing

Acquiring these insights requires accumulating, synthesizing and analyzing the influx of data from OLTP databases. The aggregation of all this data results in very large data sets for analytics. In contrast, when OLTP systems capture and update data, the amount of data touched by each transaction is actually very small. However, OLTP systems will execute thousands upon thousands of small transactions at a time. This is what OLTP systems are optimized to do; they are not optimized for the analysis of large to extremely large data sets.

This is why data warehousing solutions emerged. A data warehouse holds a copy of the data stored in OLTP databases. In addition, data warehouses hold the exponentially larger volumes of Internet and cloud-born data that enterprises now access. Ideally, a data warehouse should be optimized to handle analytics on data sets of any size. A typical data warehouse has two primary components: one, a database (or a collection of databases) to store all of the data copied from the production systems; and two, a query engine, which enables a user, a program or an application to ask questions of the data and present an answer.
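
For example, the kind of question a query engine answers might look like the following in SQL. This is purely an illustrative sketch; the orders table and its columns are hypothetical, not taken from any particular system.

-- Which product categories drove the most revenue over the past month?
-- (Hypothetical table and column names, for illustration only.)
select
    product_category,
    sum(order_total) as revenue
from orders
where order_date >= dateadd(month, -1, current_date())
group by product_category
order by revenue desc;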

Benefits of deploying a data warehouse

As previously stated, with a data warehouse you can ask and find answers to questions such as:

  • What’s the revenue?
  • Who’s buying?
  • What’s the profile of customers?
  • What pages did they visit on our website?
  • What caught their attention?
  • Which customers are buying which products?

With natural language processing and other deep learning capabilities gaining popularity, you can even develop insights about the sentiment of prospects and potential customers as they journey toward your enterprise.

Benefits of data warehousing… in the cloud

Many data warehouses deployed today were developed during the 1980s and were built for the on-premises data centers typical of the time. These solutions still exist, including “cloud-washed” versions. Both options typically involve upfront licensing charges to buy and maintain these legacy data warehouses. Yet neither legacy data warehouses nor current-generation data lakes based on Hadoop can elastically scale up, down or suspend as needed to meet the continuously varying demands of today’s enterprises.

As a result, these types of solutions demand a lot of attention to low-level infrastructure tasks that divert IT and data science teams from truly strategic analytics projects that advance the business.

With modern, cloud-built data warehouse technology now available, such as Snowflake, you can gather even more data from a multitude of data sources and instantly and elastically scale to support virtually unlimited users and workloads.

All of this is accomplished while ensuring the integrity and consistency of a single source of truth, without a fight for computing resources. This includes a mix of data varieties, such as structured and semi-structured data. Because it’s delivered as a modern cloud service, any number of users can query the data easily, in a fully relational manner, using familiar tools, with better security, performance, data protection and ease of use built in.

For these reasons, you can expect enterprises to turn to companies like Snowflake to propel insights from their data in new directions and at new speeds, regardless of the size of the business or the industry in which it competes.

How Usage-Based Pricing Delivers a Budget-Friendly Cloud Data Warehouse

When you have budgets and business plans you need to stick to, you want clear pricing for operating your cloud data warehouse. However, understanding how some cloud data warehouses charge for their services can be a frustrating task. Each vendor has its own pricing model, and users expect differences. What’s frustrating are the hidden charges, quotas and other penalties.

Here are just some of the ways Snowflake helps you to manage your data warehouse budget easily, cost-effectively and efficiently.

A simple data warehouse pricing model

Snowflake’s pricing model includes only two items: the cost of storage and the cost of compute resources consumed. The charge for storage is per terabyte, compressed, per month. The charge for compute is based on the processing units, which we refer to as credits, consumed to run your queries or perform a service (for example, Snowpipe data loading). Compute charges are billed on actual usage, per second. All of our Snowflake editions and pricing details can be found here on our website.   

All charges are usage-based

As an example, using US pricing as a reference, Snowflake storage costs can begin at a flat rate of $23/TB (average compressed amount) per month, accrued daily. Compute costs $0.00056 per second, per credit, for our Snowflake On Demand Standard Edition. Our Snowflake On Demand Enterprise Sensitive Data Edition (which includes HIPAA compliance, PCI compliance, customer-managed encryption keys and other security-hardened features) is $0.0011 per second, per compute credit.
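
As a back-of-the-envelope sketch of how those two line items combine, the query below estimates a hypothetical month: 2 TB of compressed storage plus 60 hours of an X-Small warehouse (an X-Small consumes 1 credit per hour) on Standard Edition. The workload figures are assumptions chosen purely for illustration.

-- Hypothetical monthly estimate using the example US rates above.
select
    2 * 23                           as storage_usd,  -- 2 TB compressed x $23/TB/month
    60 * 3600 * 1 * 0.00056          as compute_usd,  -- 60 hrs x 3,600 sec x 1 credit/hr x $0.00056/sec/credit
    2 * 23 + 60 * 3600 * 1 * 0.00056 as total_usd;    -- roughly $46 + $121 = $167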

Clear and straightforward compute sizing  

Once you’re in Snowflake, you can enable any number of “virtual data warehouses”, which are effectively the compute engines that power query execution. Virtual data warehouses are available in eight “T-shirt” style sizes: X-Small, Small, Medium, Large, and X-Large through 4X-Large. Each data warehouse size has a compute credit designation, with credit usage doubling at each step up in size. Refer to Table 1.

Table 1 – Snowflake Warehouse Sizes and Credit Usage

Based on our internal benchmarks, performance improves linearly as the size of the warehouse increases.
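
In practice, choosing and changing a size is a one-line statement. Here is a minimal sketch (the warehouse name analytics_wh is hypothetical): create a Medium warehouse, then resize it when a heavier workload arrives. Because compute is separate from storage, the resize does not require reloading any data.

-- Create a Medium virtual warehouse for everyday analytics.
create warehouse if not exists analytics_wh
    warehouse_size = 'MEDIUM';

-- Scale up for a heavier workload; subsequent queries use the new size.
alter warehouse analytics_wh set warehouse_size = 'XLARGE';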

No charge for idle compute time

Every query runs automatically on the virtual data warehouse from which it is launched. What’s cool about Snowflake is that you can configure a warehouse to shift into suspend mode when no queries are actively running. Once suspended, charges for idle compute time are also suspended (Figure 1).

Figure 1 – Snowflake Pricing Tracks With Actual Usage

The warehouse automatically restarts when it needs to process a new query, with no manual re-provisioning required. This is more graceful than terminating a service to stop charges, as other cloud-based solutions require.
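
Auto-suspend and auto-resume are simply warehouse properties. The sketch below reuses the hypothetical analytics_wh warehouse from the sizing example above; the five-minute threshold is an arbitrary choice.

alter warehouse analytics_wh set
    auto_suspend = 300    -- suspend after 300 seconds with no running queries
    auto_resume  = true;  -- restart automatically when the next query arrives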

With other cloud data warehouse solutions, when you terminate services to stop billing, you lose the data because it is unloaded from the data warehouse. If you later need to run queries against that data, you must re-provision the data warehouse and reload the data all over again. This is disruptive and inefficient. To avoid that disruption, these cloud data warehouse solutions must remain on and active, 24×7, with the meter running, whether you are running queries or not. Further, with other cloud data warehouses, if terminated services are restarted, unused credits do not roll over.

No hidden quotas or price bumps

When you run queries in Snowflake, there are no added usage quotas or hidden price premiums. You pay only for what you use.

Other cloud data warehouse solutions may not charge based on time, but rather will charge based on how many terabytes are scanned or how many terabytes are returned from a query. In addition, you may have no ability to control the compute resources available to you because you are given an allocation of processing units or slots. And if the query you run consumes more than the allocation provided to you, you are charged a premium.

True cloud-scale elasticity

On-premises data warehouses, in particular, make it nearly impossible to scale down hardware once it’s installed and provisioned. This is cost prohibitive because most on-premises implementations are pre-sized and purchased for the peak demand period, which may amount to just a few days a month or a year, leaving massive investments idle the rest of the time.

Keeping it simple, cost-effective and price-efficient

Unlike traditional data warehouse technology (on-premises or in the cloud) that was never designed for the cloud, Snowflake’s cloud-built data warehouse-as-a-service makes analytics and budgeting easy. We also make it as cost-effective as possible for you to get all the insight from all your data in a fraction of the time required by your current solution. By avoiding the headaches of traditional solutions, Snowflake enables you to focus on strategic data analytics and engineering projects that advance your business.

The Dream Data Warehouse Development Environment

Earlier this month, Snowflake’s Customer Enablement Team was assigned an email from one of our customers. The customer stated that he was not happy about the idea of cloning full copies of production databases for development purposes. “Do we really want to develop and maintain a system to copy a fraction of the production DB to dev?” he asked, reasoning that simply copying the entire production database would give the dev team access to too much data. Being a veteran of Snowflake, I initially dismissed his concern because of Snowflake’s zero-copy clone capability, as outlined in this article. From my perspective, a zero-copy clone would not incur any additional cost for development purposes, so why not give the dev team all of the data?

The answer, of course, as the customer pointed out, has to do with making the development effort more efficient. The assumption that zero-copy clone equates to zero-cost development is, of course, incorrect. There’s the cost of querying the data (requiring virtual warehouse credits) and the cost behind each development hour. After all, longer query times lead to longer development iterations and longer testing cycles. To create a “blissful” development environment in Snowflake, we need a more refined approach to building the development data set.

The approach outlined by the customer was rooted in complementing Snowflake’s zero-copy clone with the additional technique of block sampling. In other words, he proposed creating a view containing a sample set from the original table. This approach enables an administrator to quickly set up a dev environment with minimal data. Controlling the amount of sample data is a good thing for many development situations because developers seldom require access to the full data set.
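
The cloning half of that recipe is a single statement that shares the production data’s storage rather than copying it; the sampled view on top of the clone is built in the sections that follow. In this sketch, the dev_sales database name is hypothetical, while SALES is the production database used in the examples below.

-- Zero-copy clone the production database for development use.
create database dev_sales clone sales;

-- Alternatively, clone just a schema or an individual table:
-- create schema dev_sales.public clone sales.public;
-- create table dev_sales.public.lineitem clone sales.public.lineitem;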

Ok, let’s take a look at SAMPLE / TABLESAMPLE and see how we can do this. The syntax is quite simple:

 
 SELECT ...
 FROM ...
   { SAMPLE | TABLESAMPLE } [ samplingMethod ] ( <probability> ) [ { REPEATABLE | SEED } ( <seed> ) ]
 [ ... ]
 
 -- Where:
    samplingMethod ::= { { BERNOULLI | ROW } | { SYSTEM | BLOCK } }


Note that there are some interchangeable terms we should pay attention to when writing the query. Each pair below is a set of synonyms that differ only in spelling (see the short example after the list):

SAMPLE | TABLESAMPLE

BERNOULLI | ROW

SYSTEM | BLOCK

REPEATABLE | SEED
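
For example, the following two statements request the same 10% row sample; only the keyword spellings differ (the table name t1 is hypothetical):

select * from t1 sample row (10);
select * from t1 tablesample bernoulli (10);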

The two main methods of sampling are ROW (or BERNOULLI) and BLOCK (or SYSTEM) sampling. Let’s take a closer look at each one. 

Row Sampling

This approach uses Bernoulli sampling to select data, applying a probability of p/100 to each row. In the Snowflake documentation, we mention that this is similar to “flipping a weighted coin” on each row. The expected number of sampled (selected) rows is (p/100) * n, where n is the total number of rows in the table and p is the sample probability set by the user. For example, sampling a 1-million-row table with p = 20 returns roughly 200,000 rows.

This method means the engine must evaluate every row, deciding row by row whether it belongs in the sample, and that per-row work impacts query performance. The advantage is that we end up with a uniformly random, unbiased sample of our data. For creating a dev environment, using the Bernoulli sampling method in a view does not improve performance, because every query against the view still scans the full table. However, you could certainly create a new table based on this sampling method and scan less data. For example:

create table dev_Sales_LineItem_Bernoulli_20 as
    select *
    from SALES.PUBLIC.LINEITEM
    sample bernoulli (20);

This statement will create a development table from the LINEITEM table in our SALES database with approximately 20% of the rows from the original table.
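
As an optional sanity check (reusing the names above), you can compare the row counts of the sampled table and the original. And if the development data set must be reproducible from run to run, the REPEATABLE | SEED option from the syntax above pins the sample; the sketch below assumes seeded sampling is applied directly to the base table rather than through a view, and the seed value 42 is arbitrary.

-- The sampled table should hold roughly 20% of the original row count.
select
    (select count(*) from dev_Sales_LineItem_Bernoulli_20) as sampled_rows,
    (select count(*) from SALES.PUBLIC.LINEITEM)           as total_rows;

-- Rebuild the dev table with a fixed seed for a reproducible sample.
create or replace table dev_Sales_LineItem_Bernoulli_20 as
    select *
    from SALES.PUBLIC.LINEITEM
    sample bernoulli (20) repeatable (42);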

I won’t focus too much on this method in this post. You can feel free to experiment on your own to see if this sampling method would suit your needs. Instead, I will talk more about Block Sampling.

Block Sampling

In this approach, we apply a probability of p/100 to each block of rows. For those who are familiar with Snowflake’s micro partitions (details below), block sampling chooses individual partitions based on a specific probability. If you simply want quick access to production data and to run queries against a small percentage of the rows in a table, leveraging Block Sampling is a good way to go.

Let’s look at this with a specific example using the Block Sampling method on the LINEITEM table in the SALES database. Here are the table metadata:

In this example, I have a developer who is working on an aggregation query against the LINEITEM table. He created a simple query like this:

select
    l_returnflag,
    l_linestatus,
    sum(l_quantity) as sum_qty,
    sum(l_extendedprice) as sum_base_price,
    sum(l_extendedprice * (1 - l_discount)) as sum_disc_price,
    sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge,
    avg(l_quantity) as avg_qty,
    avg(l_extendedprice) as avg_price,
    avg(l_discount) as avg_disc,
    count(*) as count_order
from
    lineitem
where
    l_orderdate <= dateadd(day, -90, to_date('1998-12-01'))
group by
    l_returnflag,
    l_linestatus
order by
    l_returnflag,
    l_linestatus;

Result:

 

This query reports the amount of business that was billed, shipped and returned. When the developer issues this query against the full table, we see the following outcome in our query plan. (Note: the result below was obtained on an XS warehouse):


It took about 19 seconds on an XS warehouse, which is not bad by any means. However, we can see that the query still performed a full table scan along with a significant amount of aggregation.

Now, let’s see how we can improve the experience for the developer by creating a sample set of the LINEITEM table. We’ll create a special view (against our cloned dev database) for this developer and call it V_SALES_LINEITEM_SAMPLE_10. In this view, we’ll use the block sampling technique so queries scan only about 10% of the table’s partitions.

First create the view:

create view v_Sales_LineItem_Sample_10 as
    select *
    from SALES.PUBLIC.LINEITEM
    sample block (10);

Next, let’s update the query to use the view instead.

select
    l_returnflag,
    l_linestatus,
    sum(l_quantity) as sum_qty,
    sum(l_extendedprice) as sum_base_price,
    sum(l_extendedprice * (1 - l_discount)) as sum_disc_price,
    sum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge,
    avg(l_quantity) as avg_qty,
    avg(l_extendedprice) as avg_price,
    avg(l_discount) as avg_disc,
    count(*) as count_order
from
    v_Sales_LineItem_Sample_10
where
    l_orderdate <= dateadd(day, -90, to_date('1998-12-01'))
group by
    l_returnflag,
    l_linestatus
order by
    l_returnflag,
    l_linestatus;

Result:

 

Let’s take a look at the query profile for this query:

 

This query ran in about 2.5 seconds on an XS warehouse, and we scanned only about 10% of the table’s partitions. This result is much better! Snowflake pulls 10% of the partitions through the sampled view and then applies the WHERE filter to that data. The result should still be accurate enough to let the developer know the query is working as expected.

In conclusion, leveraging Snowflake’s SAMPLE clause considerably reduces the size of a developer’s test data set without losing data integrity. Even if a specific ID or timestamp biases the sampled data, chances are the developer will accept that nuance and continue with development rather than spend additional compute credits and hours waiting for results to return. An alternative is to use a LIMIT clause to reduce the number of rows returned; however, that involves modifying the original query or creating a view that still needs to be maintained. Using the SAMPLE clause, you can reliably subset a large table with an acceptable mix of results and good performance. I hope you find this helpful. Feel free to leave comments in our community and let me know your thoughts.

 

Deliveroo Delivers with Real-time Data

In a field of struggling food delivery startups, one notable success story has emerged from the fray. Termed “the European unicorn” by TechCrunch, Deliveroo is a British startup that offers fast and reliable food delivery service from a premium network of restaurants.

Deliveroo recently raised a $385 million funding round, boasts an estimated $2 billion valuation and is credited with transforming the way people think about food delivery. What is this unicorn doing differently? How has it found success where so many others have failed?

“Data is baked into every aspect of the organization,” said Deliveroo’s head of business intelligence, Henry Crawford. “Having instant access to data reveals which geographic areas are experiencing a shortage of restaurants and a shortage of particular cuisines, so we can create these hubs right at the consumer’s doorstep.”

Using data-driven insights, Deliveroo analyzes customer behavior, spots market trends and responds with swift decisions and rapid execution. Snowflake makes all of this possible.

“With data coming from a variety of sources, including web traffic, transactions and customer behavior, having a data warehouse built for the cloud provides one repository for a single source of truth,” Henry explains. “The shift to Snowflake’s cloud data warehouse has enabled us to make good on our promise that got Deliveroo started: To connect consumers with great food from great restaurants, wherever you are, and whatever it takes.”

Snowflake also accommodates Deliveroo’s 650% growth in 2016. Such rapid momentum prompted Deliveroo to expand its business intelligence team from two employees to 14. Additional team members triggered the need for more access to the same data but without impacting performance.

Since Snowflake is built for the cloud, an unlimited number of users can access all of an organization’s data from a single repository, which is critical to Deliveroo’s success. There’s no replicating data, shifting queries and other workloads to non-business hours, or queueing users to preserve performance. Instead, Snowflake’s true cloud elasticity means Deliveroo can automatically scale up, down and out (concurrency) to load and analyze data without disruption.

“None of these future plans would be possible without real-time, concurrent access to massive volumes of data,” Henry said.

What’s next for Deliveroo? Using real-time logistics algorithms to increase the number and the speed of deliveries. Deliveroo’s expansion plans also include an “Editions” program—delivery-only kitchens so partner restaurants can expand their footprint without opening brick-and-mortar locations.

Learn more about how Snowflake can accelerate your data storage and analytics initiatives.