How the Modern Data Warehouse Helps Ease the Burden of GDPR

While many activities driving an organization’s GDPR compliance are in the organization’s own hands, its IT vendors should help satisfy their customers’ compliance requirements. At a minimum, an organization’s SaaS vendors should satisfy the security requirements that reside wholly in their domain but affect their customers’ business and data security. Snowflake was built from the ground up in ways that can ease the burden of complying with GDPR, especially for security- and customer-focused organizations.

Snowflake was designed from the beginning to handle an enormous amount of structured and semi-structured data with the ease of standard SQL. The accessibility and simplicity of SQL gives organizations the flexibility to seamlessly make any updates, changes or deletions required under GDPR. Snowflake’s support for semi-structured data can make it easier to adapt to new fields and other record changes. In addition, delivering industry-best security has been fundamental to the architecture, implementation and operation of Snowflake’s data warehouse-as-a-service since day one.

A core principle of GDPR

A major GDPR compliance factor is understanding what data an organization holds and to whom it relates. This requirement demands that data is structured, organized and easy to search.

Snowflake’s relational, SQL database architecture provides a significantly simplified structure and organization, ensuring that each record has a unique and easily identified location within the database. Snowflake customers can also combine relational storage with Snowflake’s VARIANT column type for semi-structured data. This approach extends the simplicity of the relational format to the schema flexibility of semi-structured data.

Snowflake is made even more powerful by its ability to support massive concurrency. In larger organizations, there may be dozens or even hundreds of concurrent data modifications, queries and searches occurring at any one time. Traditional data warehouses can’t scale beyond a single cluster of compute at any given time, leading to long queues and delayed compliance. Snowflake’s multi-cluster, shared data architecture solves this problem by enabling as many dedicated clusters of compute resources as needed, for any purpose, resulting in more efficient workload isolation and higher query throughput. Anyone can store, organize, modify, search and query very large amounts of data with as many concurrent users or operations as necessary.

Data subject rights

Organizations affected by GDPR must ensure they can comply with data subject requests. Individuals now have significantly expanded rights to learn what type of data an organization holds about them, and to request access to it, correction of it, deletion of it and/or porting of it to a new provider. Organizations must respond to these requests quickly, generally within 30 days. Therefore, they must be able to rapidly search their business systems and data warehouse to locate all personal data related to an individual and take action.

Organizations can greatly benefit from storing all their data in a data warehouse-as-a-service with full DML and SQL capabilities. It eases the burden of searching various discrete business systems and data stores to locate the relevant data. This helps to ensure that individual records can be searched, deleted, restricted, updated, truncated, split and otherwise manipulated to align with data subject requests. It also makes it possible to move data to comply with a “right to portability” request. From the beginning, Snowflake was architected with ANSI-standard SQL and full DML to ensure these types of operations are possible.
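To make this concrete, here is a minimal sketch of how common data subject requests map to standard SQL; the table and column names below are hypothetical:

-- Right of access / portability: locate all data tied to one individual
select * from customer_events where customer_id = 42;

-- Right to rectification: correct an inaccurate record
update customers set postal_code = '94107' where customer_id = 42;

-- Right to erasure: remove the individual's personal data
delete from customer_events where customer_id = 42;
delete from customers where customer_id = 42;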

Security

Unfortunately, many traditional data warehouses require security to be home-built and cobbled together with services outside of the core offering. What’s more, they may not even enable encryption out of the box.

As the modern data warehouse built for the cloud, Snowflake was built to ensure stringent security as a key feature of the service. Snowflake has key, built-in protections and security features, including:

  • Zero management – Snowflake reduces complexity with built-in performance, security and high availability so there’s no infrastructure to tweak, no knobs to turn and no tuning required.
  • Encryption everywhere – Snowflake automatically encrypts all data at rest and in transit.
  • Comprehensive protection – Security features include multi-factor authentication, role-based access control, IP address whitelisting, federated authentication and annual rekeying of encrypted data.
  • Tri-Secret Secure – Ensures customer control and data protection by combining a customer-provided encryption key along with a Snowflake-provided encryption key and user credentials.
  • Support for AWS PrivateLink – Customers can transmit data between their virtual private network and Snowflake without traversing the public Internet, making inter-network connectivity secure and easier to manage.
  • Stronger intra-company data demarcation through Snowflake Data Sharing – Leverage Snowflake’s data sharing features to share only non-PII data with teams in your organization that don’t need access to personal data, enforcing stronger security and GDPR controls.
  • Private deployment – Enterprises can get a dedicated and managed instance of Snowflake within a separate AWS Virtual Private Cloud (VPC).

Accountability

To add to the complexity, organizations must also ensure that they, and the organizations and tools they work with, can demonstrate compliance. Snowflake aggressively audits and refines its security practices on an ongoing basis, including regular penetration testing. The Snowflake data warehouse-as-a-service is SOC 2 Type II certified, is PCI DSS compliant and supports HIPAA compliance. Customers can also audit how data has been manipulated to comply with data subject requests.

In addition to these out-of-the-box capabilities and validations, Snowflake also provides customers with our Data Protection Addendum, which is tightly aligned with GDPR requirements. Snowflake also adheres to robust contractual security commitments to facilitate more efficient transactions and simplified due diligence.

Conclusion

Under the GDPR, companies must implement technical measures that will help them respond to the data protection and privacy needs of their customers. Snowflake provides not only the benefit of storing all critical customer data in a single location, it enables rapid location and retrieval of that data so businesses can take action.

 


Boost Your Analytics with Machine Learning and Advanced Data Preparation

Enterprises can now harness the power of Apache Spark to quickly and easily prepare data and build Machine Learning (ML) models directly against that data in Snowflake. Snowflake and Qubole make it easy to get started by embedding required drivers, securing credentials, simplifying connection setup and optimizing in-database processing. Customers can focus on getting started quickly with their data preparation and ML initiatives instead of worrying about complex integrations and the cost of moving large data sets.

Setting up the Snowflake connection and getting started takes only a few minutes. Customers first create a Snowflake data store in Qubole and enter details for their Snowflake data warehouse. All drivers and packages are preloaded and kept up to date, eliminating manual bootstrapping of jars into the Spark cluster. There is no further configuration or tuning required and there are no added costs for the integration. Once the connection is saved, customers can browse their Snowflake tables, view metadata and preview the Snowflake data, all from the Qubole interface. They can then use Zeppelin notebooks to start reading and writing data to Snowflake as they begin exploring data preparation and ML use cases.

Below is an example of the object browser view showing the available tables and properties:

Security is also handled seamlessly, so customers can focus on their data rather than on safeguarding their credentials. Qubole provides centralized and secure credential management, which eliminates the need to specify any credentials in plain text. The username and password are entered only when setting up the data store and are otherwise inaccessible.

The solution is also designed for enterprise requirements and allows customers to use federated authentication and SSO via the embedded Snowflake drivers. With SSO enabled, customers can authenticate through an external, SAML 2.0-compliant identity provider (IdP) and achieve a higher level of security and simplicity. These capabilities help customers more easily share notebooks and collaborate on projects with little risk of sensitive information being exposed.

Below is a sample Scala program showing how to read from Snowflake using the data store object without specifying any credentials in plain text:

Beyond the simplicity and security of the integration, which help customers get started quickly, customers also benefit from a highly optimized Spark integration that uses Snowflake query pushdown to balance the query-processing power of Snowflake with the computational capabilities of Apache Spark. From simple projection and filter operations to advanced operations such as joins, aggregations and even scalar SQL functions, query pushdown runs these operations in Snowflake (where the data resides) to help refine and pre-filter the data before it is read into Spark. This eliminates the traditional performance and cost challenges associated with moving large amounts of data for processing, without additional overhead or management.

With Snowflake and Qubole, customers get an optimized platform for advanced data preparation and ML that makes it simple to get started. Customers can complement their existing cloud data warehouse strategy or get more value out of ML initiatives by opening access to more data. To learn more about ML and advanced data preparation with Qubole, visit the Qubole blog.

Try Snowflake for free. Sign up and receive $400 US dollars worth of free usage. You can create a sandbox or launch a production implementation from the same Snowflake environment.

How to Easily Load XML with SQL

With this two-part blog, we review how to load XML data into Snowflake and query it with ANSI-standard SQL. Doing so gives you full relational database capabilities for XML, despite it being a non-relational, document-oriented data format.

And if you’re asking yourself: “XML, is it still being used?” Rest assured, the answer is a resounding “yes”! Even though JSON has overtaken XML as the preferred lightweight, easy-to-understand data exchange format, XML remains very well entrenched in corporate IT. For example, XML has many use cases, from desktop office applications to SOA-based enterprise applications to a variety of APIs and industry-level data-exchange standards. In regard to industry standards specifically, there’s HL7 for healthcare, FpML for financial services and OTA for travel. All indications are that XML will continue to thrive and evolve.

Given the prevalence of XML, you’ll want a data warehouse that makes it easy to query XML to ask questions of the data and to produce insights.

Other approaches require you to transform XML before loading it into a SQL database. This can add complexity and delays, since you must use a separate tool to transform the XML data or custom-write the transformation scripts yourself. With Snowflake, you can ingest XML directly into the solution. No pre-transformation required. Thanks to Snowflake’s patented VARIANT data type, XML data can be stored in its own table or inserted into an existing table in Snowflake.

The following steps describe how.

Gathering sample XML data

First, let’s gather XML data. For our sample, we use publicly available US Treasury auction data. The data can be found here. This website provides a record of all Treasury auction announcements for T-bills, notes, bonds, etc., over the past decade (from 2008 to 2017) formatted as XML data. In total, the site contains about 25,000 XML files. We’ve used five as examples for this post.

Figure 1 provides a read-out of one of the XML-based, Treasury auction data files:

Figure 1. U.S. Treasury Auction Data in XML Format

 

Loading XML data into Snowflake

Next, we are going to load the data into Snowflake. Snowflake provides an intuitive UI, which makes it easy to load and prep the data to run queries for analysis. The steps are summarized as follows:

  • Create a new table with an XML column using Snowflake’s patented VARIANT data type
  • Create a new column-oriented file format for XML (“columnarizing” the XML), to be used with the new table
  • Load the XML sample data into the XML column of the new table

The actual sequence of steps is as follows:

Create a new table

  • Create a table with an XML column using the VARIANT data type
  • From the Snowflake UI, select the database into which the XML data will be loaded (e.g., ‘TUTORIAL_DB’ for our example here). [To learn how to create a new database, click here].
  • From the database tool bar, click on Tables->Create
  • Enter the table name as ‘treasury_auction_xml’. The default schema name is ‘Public’. Specify the schema (column and data type as shown):

 

 

If you prefer to work with SQL statements for a script or a workbook, you can click on “Show SQL” below the Create Table dialog box to display the equivalent SQL statements that can be cut and pasted into your script. Snowflake provides this option for most selection boxes. Doing so for this example will reveal:
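A minimal sketch of the equivalent DDL (assuming the XML column is named src_xml, the name used in the queries later in this post):

create or replace table tutorial_db.public.treasury_auction_xml (
  src_xml variant
);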

Create a new file format for XML

  • As part of the data loading process, columnarize the XML via a new file format:
    Select the database ‘TUTORIAL_DB’, click on the table ‘treasury_auction_xml’.
  • Click on ‘Load Table’; a series of pop-up windows will guide you through the data loading process. Select the warehouse you designate for this exercise. Next, specify the location of the XML files to be loaded. The dialog box for our example is shown below:

  • Next, in the file format step, click on ‘+’ and a pop-up window is displayed. Specify the various options as shown below:

Complete the loading process

Select ‘Load Options’. This is the last step to complete the loading of XML data into the new table with an XML column. A few different error handling options are presented as shown. Select the choice that is appropriate for your requirement.

  • For this example, we choose to skip an XML file if an error is detected in parsing. Click on ‘Load’ to finish.

  • The load result is displayed in the example below, showing the successful loading of a number of XML files into the new table treasury_auction_xml.
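If you prefer to script the load rather than use the UI, here is a minimal SQL sketch of the same steps; the internal stage name (xml_stage) and local file path are hypothetical, and the error-handling option mirrors the choice above:

-- Create an internal stage and upload the XML files (PUT runs from SnowSQL)
create or replace stage xml_stage;
put file:///tmp/treasury/*.xml @xml_stage;

-- Load the staged files, skipping any file that fails to parse
copy into treasury_auction_xml
  from @xml_stage
  file_format = (type = 'XML')
  on_error = 'SKIP_FILE';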

Querying the XML data

With the XML data loaded, you can easily query the data in a fully relational manner, expressing queries with robust ANSI SQL. For example, for a quick glance at all of the XML documents loaded into the table, execute this query: SELECT src_xml FROM treasury_auction_xml.

This query will return all of the XML documents in the XML column. The following is a portion of the output:

 

 

This is just a quick example of how you can easily query XML in Snowflake with standard SQL. In addition, you have an assortment of XML functions for querying elements within the XML files. This includes the powerful LATERAL FLATTEN table function, an un-nesting function that lets you access the hierarchical structure within an XML document.
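For example, here is a minimal sketch of both approaches; the element names (SecurityType, AuctionDate) are hypothetical stand-ins for the actual tags in the Treasury documents:

-- Pull individual elements out of each XML document
select
  xmlget(src_xml, 'SecurityType'):"$"::string as security_type,
  xmlget(src_xml, 'AuctionDate'):"$"::date    as auction_date
from treasury_auction_xml;

-- Un-nest every child element of each document with LATERAL FLATTEN
select
  get(f.value, '@') as element_name,
  get(f.value, '$') as element_value
from treasury_auction_xml,
  lateral flatten(input => src_xml:"$") f;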

 


Data is Only Transformative with Transformative Technology

At the recent AWS re:Invent show in Las Vegas, The Cube host, Lisa Martin, had the chance to sit down with Bob Muglia, CEO and President of Snowflake. Bob shared his thoughts on Snowflake’s latest addition to its cloud-built data warehouse, Snowpipe, while looking back at Snowflake’s origins and ahead to its future in order to enable the data-driven enterprise.

What is Snowpipe, and how do customers get started with it?

Muglia: Snowpipe is a way of ingesting data into Snowflake in a streaming, continuous way. You simply drop new data that’s coming into S3 and we ingest it for you automatically. Snowpipe makes it simple to bring the data into your data warehouse on a continuous basis, ensuring that you’re always up-to-date and that your analysts are getting the latest insights and the latest data.

In the five years since you launched, how has the opportunity around cloud data warehousing changed? How has Snowflake evolved to become a leader in this space?

Muglia: If you go back five years, this was a timeframe where NoSQL was all the rage. Everybody was talking about how SQL was passé and something you’re not going to see in the future. Our founders had a different view. They had been working on true relational databases for almost 20 years, and they recognized the power of SQL and relational database technology. But they also saw that customers were experiencing significant limitations with existing technology. They saw in the cloud, and in what Amazon had done, the ability to build an all new database that takes advantage of the full elasticity and power of the cloud to deliver whatever analytics the business requires. However much data you want, however many queries you want to run simultaneously, Snowflake takes what you love about a relational database and allows you to operate in a very different way. Our founders had that vision five years ago and successfully executed on it. The product has worked beyond the dreams of our customers, and that response from our customers is what we get so excited about.

How did you identify what data should even be streamed to Snowpipe?

Muglia: As an example, in entertainment we’re experiencing a data explosion. You have streaming video data, subscription data, billing data, social media data and on and on. None of this is arriving in any sort of regular format. It’s coming as semi-structured data, like JSON or XML. Up until Snowflake came onto the scene with a truly cloud-based solution for data warehousing, everyone was struggling to wrangle all these data sets. Snowpipe lets you bring in multiple data sets, merge them in real-time and get the analytics back to your business in an agile way that’s never been seen before.

How does your partnership with AWS extend Snowflake’s capabilities?

Muglia: People don’t want their data scattered all over the place. With the cloud, with what Amazon’s done and with a product like Snowflake, you can bring all of your data together. That can change the culture of a company and the way people work. All of a sudden, data is not power. Data is available to everyone, and it’s democratized so every person can work with that data and help to bring the business forward. It can really change the dynamics around the way people work.

Tell us a little bit about Snowflake’s collaboration with its customers. How are they helping to influence your future?

Muglia: As a company, we run Snowflake on Snowflake. All of our data is in Snowflake, all of our sales data, our financial data, our marketing data, our product support data and our engineering data. Every time a user runs a query, that query is logged in Snowflake and the intrinsics about it are logged. When you have a tool with the power of Snowflake, you can effectively answer any business question in just a matter of minutes. And that’s transformative to the way people work. And to me, that’s what it means to build a data-driven culture: The answers to business questions are inside what customers are doing and are encapsulated in the data.

Try Snowflake for free. Sign up and receive $400 US dollars worth of free usage. You can create a sandbox or launch a production implementation from the same Snowflake environment.

New Snowflake Features Released in H2’17

Looking back at the first half of this year, the main theme at Snowflake has been centered around scaling – the team, the service and our customer base. We introduced and improved the self-service onboarding experience, grew our ecosystem by collaborating with major technology partners and SIs and invested in making our service faster, more secure and easier to use.

In the summer, we announced the general availability of Snowflake’s Instant Data Sharing. As we continue to grow, this will provide the foundation for data-driven organizations of all sizes to instantly exchange data, create new insights and discover new revenue streams.

In the second half of this year, we started a journey to evolve the service in substantial ways and this will continue throughout 2018. 

Instant elasticity, per-second billing and 4XL virtual warehouses
Imagine you have instant access to performance and scalability – and truly pay only for what you use.

Many cloud DW solutions make the claim in the headline above. Nonetheless, a closer look reveals real differences in the degree of scalability and concurrency supported, the overall user experience and the types of workloads a user can run at scale. One of our key investments in this area resulted in our instant elasticity feature:

  • The introduction of per-second pricing represents one crucial building block. Now, our customers are billed to the second instead of a full hour when running workloads in Snowflake.
  • We also improved our provisioning algorithms, using past usage data to better provision compute capacity for future needs.
  • We added support for our new 4XL virtual warehouses, doubling the maximal compute configuration and allowing our customers to tackle their most challenging workloads and performance SLAs.

Based on customer feedback, the decrease in wait time from minutes to seconds is a huge benefit. You can now ensure the right performance for your executive dashboards and not worry anymore about a full-hour charge. You can also scale your load instantly and have fast access to the most current data.

Improving performance and SQL programmability
Our ongoing mission is to build the fastest data warehouse with the SQL you love.

Throughout the last quarter, we released a wide range of SQL capabilities that address database migration challenges and increase both ANSI SQL compatibility and overall SQL programmability (a few of these are sketched in the example after the list below):

  • Support for JavaScript Table UDFs (in public preview).
  • Support for Double-Dot notation for specifying database objects to ease migration from on-premise database systems.
  • Support for Table Literals.
  • INSERT OVERWRITE: allows rebuilding tables to ease migration from Hadoop systems.
  • LEAD/LAG function enhancement: Ignoring NULL values.
  • SAMPLE/TABLESAMPLE – announced general availability.
  • We introduced functions to estimate frequent values in near-linear time.
  • More flexibility in configuring the start-of-week and week-of-year policies.
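Here is a hedged sketch of a few of these features; the database, table and column names are hypothetical:

-- LEAD/LAG ignoring NULL values
select event_ts,
       lag(status) ignore nulls over (order by event_ts) as last_known_status
from analytics_db..events;   -- double-dot notation resolves to the PUBLIC schema

-- Rebuild a table in place, easing migrations from Hadoop systems
insert overwrite into events_archive
select * from analytics_db..events;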

In addition, we have been working on substantial performance improvements that address ease-of-use and automation. Please stay tuned for major announcements in early 2018.

Staying ahead with enterprise-ready security and compliance
From day one, security has always been core to Snowflake’s design.

  • Programmatic SSO support – Now, a Snowflake user can leverage browser-assisted single sign-on programmatically through our Snowflake ODBC, JDBC and Python drivers and our command line tool, SnowSQL. In contrast to some of our competitors, you don’t need to write code to connect to your SAML 2.0-compliant IdP. (Programmatic SSO support is available in our Enterprise Edition.)
  • PrivateLink support for Snowflake – We worked with AWS to integrate PrivateLink with Snowflake. With PrivateLink, Snowflake users can now connect to Snowflake by bypassing the public internet. No cumbersome proxies need to be set up between Snowflake and their network. Users have full control over egress traffic.

Improving our ecosystem and data loading
Enabling developers and builders to onboard new workloads and create applications with their favorite tools, drivers and languages remains a top priority.

Snowpipe
We recently announced Snowpipe, which represents an important milestone in many ways. First, users can now efficiently address continuous loading use cases for streaming data. Second, with Snowpipe, we introduced the first service in addition to the core data warehousing service. Snowpipe is serverless, so there’s no need to manage a virtual warehouse for data loading into Snowflake. Finally, we continue to deeply integrate with the entire AWS stack: Snowpipe allows customers to take advantage of AWS S3 event notifications to automatically trigger Snowflake data loads into target tables. You can read more about Snowpipe here: Overview and First Steps.

Broader Ecosystem
While we continue adding functionality and enriching Snowflake with additional services, we strongly believe in freedom of choice. Instead of preaching a unified platform and a one-size-fits-all philosophy, our key objective is to make integrations easier and more performant.

  • For enterprise-class ETL, data integration and replication:
    • Added PowerCenter 10.2 support for Snowflake.
    • Added support for the ETL use case with AWS Glue – build modern, performant ETL jobs via our Spark connector, pushing compute into Snowflake.
  • For data analytics and data science:
    • Added a native Snowflake connector for Alteryx supporting in-DB components.
    • Upgraded our Snowflake dplyr library.
  • For business intelligence (BI):
    • Added a native Snowflake connector for ChartIO (in preview).
  • For parallel data loading & unloading via the COPY command, developers can now:
    • Load non-UTF-8 character-encoded data.
    • Unload to the Parquet file format.

Increasing transparency and usability
These features are designed to strike the right balance between offering a service that is easy to operate and exposing actionable insights into the service itself.

  • Resource monitors can now be created through the Snowflake UI. As a user creates resource monitors, they can also set up notifications. Notifications for “suspend” and “suspend immediately” actions are sent out when quota thresholds are reached for a particular resource monitor. In addition, we added support for up to 5 warnings that can be defined for each resource monitor.
  • New MONITOR USAGE privilege to grant access to review billing and usage information via SQL or through the Snowflake UI. This gives non-account administrators the ability to review information about data stored in databases and stages, as well as insight into warehouse usage.
  • For improved usability and convenience, we added a new option (COPY GRANTS) that allows users to preserve or copy grants when executing the CREATE OR REPLACE TABLE, CREATE OR REPLACE VIEW, CREATE TABLE LIKE and CREATE TABLE CLONE variations (a SQL sketch of these capabilities follows this list).
  • We also completely overhauled our worksheet and improved the overall experience. We are going to roll these changes out in stages in the coming weeks, with much more to come in 2018.
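Here is that hedged sketch of the resource monitor, MONITOR USAGE and COPY GRANTS capabilities; the monitor, warehouse, role and table names are hypothetical:

-- Cap monthly credits, warn at 90% and suspend at 100%
create resource monitor analytics_monitor with credit_quota = 100
  triggers on 90 percent do notify
           on 100 percent do suspend;
alter warehouse analytics_wh set resource_monitor = analytics_monitor;

-- Let a non-administrator role review billing and usage information
grant monitor usage on account to role finance_analyst;

-- Preserve existing grants while rebuilding a table
create or replace table sales copy grants as select * from sales_staging;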

Scaling and investing in service robustness
These service enhancements aren’t customer visible, but are crucial for scaling to meet the demands of our rapidly growing base of customers.

  • Our global expansion continues with a new Snowflake region in Sydney, Australia.
  • We introduced a new product edition: Virtual Private Snowflake (VPS). With VPS, customers get a dedicated and managed instance of Snowflake within a separate dedicated AWS VPC, with all the performance, simplicity and concurrency inherent to Snowflake. This also includes completely dedicated metadata services isolated from the metadata activity of other Snowflake customers.
  • We continued our investments in system stability, improving and strengthening various components of our cloud services layer.

Conclusion

We are well underway to reach 1000 customers by the end of 2017, with tens of compressed PBs stored in Snowflake and millions of data processing jobs successfully executed daily. We grew to over 300 Snowflakes and expanded our global presence. 2017 has been truly a defining year in Snowflake’s young history: Self-service, instant Data Sharing, broadening our ecosystem, instant elasticity, Snowpipe, VPS and PrivateLink for Snowflake.  We look forward to an exciting 2018 as we continue to help our customers solve their data challenges at scale.

For more information, please feel free to reach out to us at info@snowflake.net. We would love to help you on your journey to the cloud. And keep an eye on this blog or follow us on Twitter (@snowflakedb) to keep up with all the news and happenings here at Snowflake Computing.

Your First Steps with Snowpipe

Please note that Snowpipe is in public preview in the US West Region of AWS. Auto Ingest will be available by February 2018. 

This blog post walks you through your first steps with deploying Snowpipe. We start with the steps for configuring Snowpipe so that it continuously loads data from Amazon S3 into a target table in Snowflake. Next, we explore the easy-to-configure mode of Snowpipe, where S3 event notifications sent to a dedicated queue in Amazon Simple Queue Service (SQS) inform Snowpipe about new data that is ready for loading. Finally, this post explains how to exert tighter programmatic control over when and what notifications are sent using the Snowpipe REST API.

If you haven’t read our blog about what Snowpipe is, click here to understand how Snowpipe works and the business benefits it provides. Otherwise, keep reading this blog to learn how to configure Snowpipe.

Snowpipe one-time configuration in SQL

The following SQL statements show the one-time configuration experience for setting up Snowpipe. They include familiar DDL, such as creating an external stage and a new table, as well as how to create a pipe, a new database object in Snowflake.

In the example below, we use a VARIANT column in Snowflake to store incoming data. Semi-structured data types in Snowflake are particularly well-suited for streaming or continuous data loading scenarios. They allow for seamless schema evolution in your application, over time, without making any changes to the Snowflake table schema. Snowflake automatically applies an efficient columnar representation to the data while it’s being loaded.

This makes it easy to leave the incoming continuous data in its original format. Note, a conversion into a different format such as Parquet is not needed for efficient loading or querying in Snowflake:

use role snowpipe_role;
use database snowpipe;

create or replace stage snowpipe.public.snowstage
    url='s3://snowpipe-demo/'
    credentials = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...' );
show stages;

-- Create target table for JSON data
create or replace table snowpipe.public.snowtable(jsontext variant);
show tables;

-- Create a pipe to ingest JSON data
create or replace pipe snowpipe.public.snowpipe auto_ingest=true as
    copy into snowpipe.public.snowtable
    from @snowpipe.public.snowstage
    file_format = (type = 'JSON');
show pipes;

The key database concept that Snowpipe introduces to Snowflake’s SQL language is called a “pipe”. Think of a pipe as a wrapper around Snowflake’s familiar COPY statement for data loading. A Snowpipe pipe continuously looks for new data, then loads it using the COPY statement in the pipe definition.

In the current example, the pipe is defined with AUTO_INGEST=true, which tells Snowflake to use an SQS queue in AWS to receive event notifications from an S3 bucket pertaining to new data that is ready to load. The name of the SQS queue is shown in a new column within the results of the SHOW STAGES command. This queue name is used to configure S3 event notifications.

Snowpipe one-time configuration in AWS S3

With Snowpipe auto-ingest, available in February 2018, the remaining step requires configuring event notifications for the S3 bucket so the pipe recognizes when there is new data available for loading. Auto-ingest relies on SQS queues to deliver the notifications from S3 to Snowpipe. All Snowpipe SQS queues are created and managed by Snowflake so you don’t have to worry about managing yet another AWS service for your application.

The name of the SQS queue is available from the SHOW STAGES command in a new column called NOTIFICATION_CHANNEL. The following screenshot shows the event notification configuration for the S3 bucket. Note the use of the name (or ARN, to be precise) of the SQS queue at the bottom of the dialog box, along with the check mark for the ObjectCreate notifications.

 

After saving this notification configuration, place new files in the bucket. Snowflake will receive automatic notifications about the files from S3. Then, Snowpipe will automatically begin loading them into whatever table you have defined as the target in your pipe configuration.

Continuously delivering data

Snowpipe requires placing data into S3 before loading. This design choice takes advantage of the rich ecosystem of tools designed for storing data in S3. Snowpipe’s continued use of S3 as the stage for data loading allows you to use the tool of your choice for delivering data to an S3 bucket. This includes services such as AWS Kinesis with Firehose, and Kafka with its S3 connector. Both are popular choices for transporting continuous and streaming data.

The architectural take-away from this is important: Snowflake can automatically and continuously load data delivered to S3 from Kinesis or Kafka.

Querying continuously loading data

A simple, but useful, example query for Snowpipe tracks the arrival of rows over time with a JSON field called 'created_at':

select
    date_trunc('MINUTE', to_timestamp(jsontext:created_at)),
    count(*)
from snowpipe.public.snowtable
group by date_trunc('MINUTE', to_timestamp(jsontext:created_at));

This illustrates how many rows arrived each minute, serving as a starting point for more expressive analytics over time windows.

It’s worth pointing out the powerful date, time and windowing functions Snowflake’s SQL language provides. This makes it easy to analyze data, as it arrives over time, and group it into time windows for examining trends.

Full control for Snowpipe

Using S3 event notifications to tell Snowpipe about new data to load from an S3 bucket may not be appropriate in some use cases. Consider an application that first needs to perform some processing in AWS Lambda before performing the load into Snowflake. In that case, we recommend timing notifications to Snowpipe more tightly.

Snowpipe allows for defining pipes with AUTO_INGEST set to ‘false’ while using exactly the same DDL, as shown above, to create stages and pipes. Without automatic ingestion pipes, however, an application needs to invoke a REST API with the filename, or a set of file names, to tell Snowpipe that the file is ready for loading. For instance, you can call the Snowpipe REST API after finishing the preprocessing from within your AWS Lambda code.

In general, any application capable of invoking a REST API can notify Snowpipe about new files. To make developing against the REST API easier, SDKs for both Python and Java are available. The following code snippet shows an example of the proxy-like abstraction that the Python SDK provides. Note how the ingest_files call can consume a set of files in addition to just a single file:

from snowflake.ingest import SimpleIngestManager, StagedFile

# Connect to the Snowpipe REST endpoint for a specific pipe
ingest_manager = SimpleIngestManager(account='...',
                                     user='...',
                                     pipe='...',
                                     private_key=...)

# Notify Snowpipe that one or more staged files are ready to load
staged_file_list = []
staged_file_list.append(StagedFile(filename, None))
resp = ingest_manager.ingest_files(staged_file_list)

You can find the latest SDK versions on PyPI or Maven Central by using the search terms “Snowflake ingest”.

Try Snowpipe today

Please give Snowpipe a spin today and let us know your feedback. Snowpipe using REST-based notifications is now available in US West. You can find the documentation and information on how to get started here.

Snowpipe with auto-ingest using SQS will be available for preview in December, 2017. If you are interested in participating in a private preview for this capability, please let us know here.

You can also try Snowflake for free. Sign up and receive $400 US dollars worth of free usage. You can create a sandbox or launch a production implementation from the same Snowflake environment.

How to Convert JSON to SQL

One of the key reasons I joined Snowflake is its built-in support to load and query semi-structured data such as JSON. In most conventional data warehouse and Big Data environments, you have to first load this type of data into a Hadoop or NoSQL platform. Then, you must parse it with a tool such as MapReduce so you can load the data into tables in a relational database. Then, and only then, are you ready to analyze the data with SQL queries or a BI/analytics tool. But why take extra steps when there is a faster, easier way to get the job done?

With Snowflake, you can load your semi-structured data direct into a relational table, query the data with a SQL statement and then join it to other structured data – all while not fretting about future changes to the schema of that data. Snowflake actually keeps track of the self-describing schema so you don’t have to. No ETL or shredding required.
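As a small, hedged sketch of what that looks like in practice (the stage, table and field names are hypothetical):

-- Load raw JSON into a single VARIANT column, no shredding required
create or replace table app_events (v variant);

copy into app_events
  from @my_json_stage
  file_format = (type = 'JSON');

-- Query nested fields directly with SQL path notation
select v:user.id::string    as user_id,
       v:event_type::string as event_type,
       to_timestamp(v:ts)   as event_time
from app_events
where v:event_type::string = 'purchase';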

This means you can leverage your existing knowledge and skills in SQL to jump into the world of big data. Even with this feature, there is still a tiny bit to learn. However, it’s easy with Snowflake’s extensions to SQL. To get you started, we have produced a handy ebook that takes you step-by-step through loading some JSON into Snowflake, then querying that data with SQL. The book is called How to analyze JSON with SQL: Schema-on-read made easy.

Get your free ebook now and start your journey to analyzing big data in the cloud!

Be sure to keep an eye on this blog or follow us on Twitter (@snowflakedb and @kentgraziano) for all the news and happenings here at Snowflake.

P.S. If you don’t already have a Snowflake account, you can sign up for a self-service account here and get a jumpstart with $400 in free credits!

 

What makes Snowflake a data warehouse?

One of the most common questions I get when speaking to people about Snowflake is: “Why do you call it a data warehouse and not a database?” This is a very reasonable question given some of the characteristics of Snowflake.

At Snowflake, in part, we say we are a full relational database management system (RDBMS) built for the cloud. We are ACID compliant and we support standard SQL. Sounds like a database to me, too. Let’s take a closer look just to be sure.

What is a database?

A database is a collection of information organized to be easily accessed, managed and updated. While there are many types of databases available today, the most common is an RDBMS. But when most folks say “database”, they usually mean a traditional RDBMS that handles Online Transaction Processing (OLTP).

So, what are some of the defining characteristics of an OLTP database?

  • Designed for rapid storage and retrieval of small sets of current data records in support of transactions and interactions within an enterprise.
  • Data is organized in tables and columns, allowing users access via structured query language (SQL).
  • Handles quick, real-time activity such as entering a customer name, recording a sale and recording all accounting activity of that sale.
  • Works well for basic operational reporting of a limited number of records. Analytic reporting is relegated to simple, static reports often driven by IT.

What is a data warehouse?

Some of the defining characteristics of a data warehouse are:

  • A database designed to store and process large volumes of current and historical data collected from multiple sources inside and outside the enterprise for deep analysis.
  • Organizes data into tables and columns, and allows users access via SQL.
  • Optimized for loading, integrating and analyzing very large amounts of data.
  • Designed to support descriptive, diagnostic, predictive and prescriptive analytic workloads.

Snowflake definitely includes the overlapping characteristics of both a database and a data warehouse: ACID compliance, support for standard SQL, etc. But Snowflake also embodies all of the defining characteristics of a data warehouse.

One of the key differentiators of Snowflake from other solutions is that it’s specifically designed for data warehousing and high speed analytic processing. Rather than a generalized SQL database that has been “tuned” or adapted to handle these types of workloads, Snowflake was built from the ground up for the cloud to optimize loading, processing and query performance for very large volumes of data. Therefore, hands down, Snowflake is a data warehouse.

So, why do we still need a specialized data warehouse engine?

As OLTP databases have been able to scale higher and innovations like in-memory databases have emerged, some organizations have questioned whether they still need a separate technology or specialized system for reporting and analytics. The answer, again, requires us to look at the basics: What benefits emerge from storing and analyzing data in a separate system?

  1. It eases the burden of reporting from transactional systems by removing the contention for limited and expensive resources.
  2. It produces more business-friendly data results by allowing the data to be restructured to a more suitable format.
  3. It provides access to a wider array of reports more quickly because all the resources in the data warehouse are dedicated to reporting and analysis.
  4. It integrates valuable data from across the enterprise for richer insight, something that can’t (and shouldn’t) be done in an OLTP system.

For more information on how you can up your data warehousing game with a modern, built-for-the-cloud approach, check out some of our free resources such as our ebook The Data Warehouse: The Engine That Drives Analytics. We would love to help you on your journey to the cloud so keep an eye on this blog or follow us on Twitter (@snowflakedb and @kentgraziano) to keep up with all the news and happenings here at Snowflake.

New Snowflake features released in Q1’17

We recently celebrated an important milestone in reaching 500+ customers since Snowflake became generally available in June 2015. As companies of all sizes increasingly adopt Snowflake, we wanted to look back and provide an overview of the major new Snowflake features we released during Q1 of this year, and highlight the value these features provide for our customers.

Expanding global reach and simplifying on-boarding experience

Giving our customers freedom of choice, along with a simple, secure, and guided “Getting Started” experience, was a major focus of the last quarter.

  • We added a new region outside of the US; customers now have the option to analyze and store their data in Snowflake accounts deployed in EU-Frankfurt. Choosing the appropriate region is integrated into our self-service portal when new customers sign up.
  • In addition, we added our high-value product editions, Enterprise and Enterprise for Sensitive Data (ESD), to our self-service offerings across all available regions. For example, with Enterprise, customers can quickly implement auto-scale mode for multi-cluster warehouses to support varying, high concurrency workloads. And customers requiring HIPAA compliance can choose ESD.
  • Exploring other venues for enabling enterprises to get started quickly with Snowflake, we partnered with the AWS Marketplace team to include our on-demand Snowflake offerings, including the EU-Frankfurt option, in their newly-launched SaaS subscriptions.

Improving out-of-the-box performance & SQL coverage

We are committed to building the fastest cloud DW for your concurrent workloads with the SQL you love.

  • One key performance improvement introduced this quarter was the reduction of compilation times for JSON data. Internal TPC-DS tests demonstrate a reduction of 30% to 60% for most of the TPC-DS queries (single stream on a single, 100TB JSON table). In parallel, we worked on improving query compile time in general, providing up to a 50% improvement in performance for short queries.
  • Another new key capability is the support for bulk data inserts on a table concurrently with other DML operations (e.g. DELETE, UPDATE, MERGE). By introducing more fine-grained locking at the micro-partition level, we are able to allow concurrent DML statements on the same table.
  • To improve our data clustering feature (currently in preview), we added support for specifying expressions on table columns in clustering keys. This enables more fine-grained control over the data in the columns used for clustering.
  • Also, we reduced the startup time for virtual warehouses (up to XL in size) to a few seconds, ensuring almost instantaneous provisioning for most virtual warehouses.
  • We extended our SQL by adding support for the ANSI SQL TABLESAMPLE clause. This is useful when a user wants to limit a query operation performed on a table to only a random subset of rows from the table.
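Two of these in a quick, hedged sketch (the table and column names are hypothetical):

-- Cluster a table on an expression over one of its columns
alter table events cluster by (to_date(event_ts));

-- ANSI TABLESAMPLE: run a query against a random 5% of the table's rows
select count(*) from events tablesample (5);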

Staying Ahead with Enterprise-ready Security

From day one, security has always been core to Snowflake’s design.

  • We expanded Snowflake’s federated authentication and single sign-on capability by integrating with many of the most popular SAML 2.0-compliant identity providers. In addition to Okta, Snowflake now supports ADFS/AD, Azure AD, Centrify and OneLogin, to name just a few.
  • To advance Snowflake’s built-in auditing, we introduced new Information Schema table functions (LOGIN_HISTORY and LOGIN_HISTORY_BY_USER) that users can query to retrieve the short-term history of all successful and failed login requests in the previous 7 days. If required, users can maintain a long-term history by copying the output from these functions into regular SQL tables.
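For example, here is a hedged sketch of reviewing recent login attempts for a single (hypothetical) user:

select event_timestamp, user_name, is_success, error_message
from table(information_schema.login_history_by_user(
       user_name => 'JSMITH',
       result_limit => 100))
order by event_timestamp desc;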

Improving our ecosystem

Enabling developers and builders to create applications with their favorite tools and languages remains a high priority for us.

  • With respect to enterprise-class ETL, we successfully collaborated with Talend in building a native Snowflake connector based on Talend’s new and modern connector SDK. The connector, currently in preview, has already been deployed by a number of joint customers with great initial feedback on performance and ease-of-use.
  • To tighten the integration of our Snowflake service with platforms suited for machine learning and advanced data transformations, we released a new version of our Snowflake Connector for Spark, drastically improving performance by pushing more query operations, including JOINs and various aggregation functions, down to Snowflake. Our internal 10 TB TPC-DS performance benchmark tests demonstrate that running TPC-DS queries using this new v2 Spark connector is up to 70% faster compared to executing SQL in Spark with Parquet or CSV (see this Blog post for details).
  • We continue to improve our drivers for our developer community. Listening to feedback from our large Python developer community, we worked on a new version of Snowflake’s native Python client driver, resulting in up to 40% performance improvements when fetching result sets from Snowflake. And, after we open-sourced our JDBC driver last quarter, we have now made the entire source code available on our official GitHub repository.
  • And, last, but not least, to enhance our parallel data loading via the COPY command, ETL developers can now dynamically add file metadata information, such as the actual file name and row number, which might not be part of the initial payload.

Increasing transparency and usability

These features are designed to strike the right balance between offering a service that is easy to operate and exposing actionable insights into the running service.

  • One major addition to our service is Query Profile, now generally available and fully integrated into Snowflake’s web interface. Query Profile is a graphical tool you can use to detect performance bottlenecks and areas for improving query performance.
  • Various UI enhancements were implemented: Snowflake’s History page now supports additional filtering by the actual SQL text and query identifier. We also added UI support for creating a Parquet file format in preparation for loading Parquet data into variant-type table columns in Snowflake.
  • A new Information Schema table function (TABLE_STORAGE_METRICS) exposes information about the data storage for individual tables. In particular, a user can now better understand how tables are impacted by Continuous Data Protection, particularly Time Travel and Fail-safe retention periods, as well as which tables contain cloned data.
  • We also recently introduced smarter virtual warehouse billing through Warehouse Billing Continuation (see this Blog post for details). If a warehouse is suspended and resumed within 60 minutes of the last charge, we do not charge again for the servers in the warehouse. WBC eliminates additional credit charges, and we hope it will reduce the need for our customers to strictly monitor and control when warehouses are suspended and resized.

Scaling and investing in service robustness

These service enhancements aren’t customer visible, but are crucial for scaling to meet the demands of our rapidly growing base of customers.

  • As part of rolling out the new EU (Frankfurt) region, we increased automation of our internal deployment procedures to (a) further improve engineering efficiency while (b) laying the foundation for rapidly adding new regions based on customer feedback.
  • We further streamlined and strengthened our various internal testing and pre-release activities, allowing us to ship new features to our customers on a weekly basis – all in a fully transparent fashion with no downtime or impact to users.

Conclusion and Acknowledgements

This summary list of features delivered in Q1 highlights the high velocity and broad range of features the Snowflake Engineering Team has successfully delivered in a short period of time. We are committed to putting our customers first and maintaining this steady pace of shipping enterprise-ready features each quarter. Stay tuned for another feature-rich Q2.

For more information, please feel free to reach out to us at info@snowflake.net. We would love to help you on your journey to the cloud. And keep an eye on this blog or follow us on Twitter (@snowflakedb) to keep up with all the news and happenings here at Snowflake Computing.

RI (Referential Integrity) Constraints: 3 Reasons to Include Them in Your Data Warehouse

Over the years, I have had numerous conversations about the value of having referential integrity (RI) constraints, such as primary and foreign keys, in a relational data warehouse or data mart.

Many DBAs object that RI constraints slow the load process. This is a valid point if you are talking about enforced constraints that are checked in real time during the load. But this is not an issue if you define the constraints as disabled.

Which then leads to this common question:

Is there any reason to maintain a permanently disabled FK in the data model?  If it is not going to be enabled, then from my perspective, it doesn’t make any sense to define the FK.  Instead, the relationship can be described in the comment of the child column.

So, why would I want RI constraints in my data warehouse?

Mostly it has to do with good design and development best practices. Here is my rationale for why you should consider including RI constraints in your data warehouse design.

#1 – Design Metadata

RI constraints are valuable metadata/documentation. If somebody reverse engineers the database (say with ERWin or Oracle Data Modeler), the PKs and FKs show up in the diagram (much better than having to read a column comment to discover a relationship). This is quite valuable for new people on your project to orient themselves to the existing schema design and understand how the various tables in your data warehouse are related to each other.

RI in a diagram
A picture is worth a thousand words

#2 – BI Metadata

If you want to use any sort of reporting or BI tool against the database (it is a data warehouse,  after all), most modern business intelligence and visualization tools import the foreign key definitions with the tables and build the proper join conditions. This is much better than having someone guess what the join will be and then manually adding it to the metadata layer in the reporting tool. This also ensures that different developers don’t interpret the joins differently.

Examples of tools that can read the Snowflake data dictionary include Looker, Tableau, COGNOS, MicroStrategy, and many others. Some of these tools actually use the FK definitions for join culling to provide better query performance.

#3 – QA your ETL/ELT code

I know you think your ETL code is perfect.

But does every developer test to the same standards? Do you maybe have a QA team who separately validates that the ETL is doing what you expect?

If so, having declared primary, unique, and foreign key constraints in your data warehouse gives the team more information they can use to ensure the quality of the data. In fact, using the Snowflake Information Schema, a QA engineer can potentially generate SQL to test that the loaded data conforms to the defined constraints.
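For instance, using the SAT_REGIONS and HUB_REGION tables from the example later in this post, such generated checks might look like this sketch:

-- Duplicate primary key check: any rows violating the declared PK?
select hub_region_key, sat_load_dts, count(*)
from sat_regions
group by hub_region_key, sat_load_dts
having count(*) > 1;

-- Orphaned foreign key check: satellite rows with no matching hub row
select s.hub_region_key
from sat_regions s
left join hub_region h on h.hub_region_key = s.hub_region_key
where h.hub_region_key is null;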

Defining RI Constraints in Snowflake

You, of course, can (and IMHO should) define RI constraints in Snowflake. You can define primary keys, unique keys, foreign keys, and NOT NULL constraints. Because Snowflake is specifically engineered for data warehousing, only the NOT NULL constraints are enforced. The rest are always created as disabled.

The syntax is standard SQL. You can define the constraints both inline and out-of-line.

Here is a simple example of inline constraints:

Create or replace TABLE SAT_REGIONS (
  HUB_REGION_KEY NUMBER(38,0) NOT NULL,
  SAT_LOAD_DTS DATE NOT NULL,
  REGION_COMMENT VARCHAR(152),
  HASH_DIFF VARCHAR(32) NOT NULL,
  SAT_REC_SRC VARCHAR(50) NOT NULL,
  constraint SAT_REGIONS_PK primary key (HUB_REGION_KEY, SAT_LOAD_DTS),
  constraint SAT_REGIONS_FK1 foreign key (HUB_REGION_KEY)
    references KENT_DB.DATAVAULT_MAIN.HUB_REGION(HUB_REGION_KEY)
);

Example of an out-of-line constraint:

ALTER TABLE SAT_REGIONS ADD constraint 
SAT_REGIONS_PK primary key (HUB_REGION_KEY, SAT_LOAD_DTS);

It’s that easy. So why don’t you have constraints in your data warehouse?

 

Did you hear? Snowflake was declared the #1 Cloud Data Warehouse in a recent GigaOM analyst report. Go here to get your copy of the report.

As always, keep an eye on this blog site, our Snowflake Twitter feeds (@SnowflakeDB), (@kentgraziano), and (@cloudsommelier) for updates on all the action and activities here at Snowflake Computing.