We know that even the smallest service disruptions can cause inconveniences on your end, and not being able to find information about what's happening can be even more frustrating. We do our best to make sure outages don't happen, but when they do, we think it's important to be transparent and communicate those issues.

Starting today, you'll be able to receive status updates for Google Cloud Platform services on the Google Cloud Platform Status Dashboard. This complements other tools and services, including Google Cloud Monitoring, that monitor your service's health. We hope that these services will make disruptions a bit more bearable by surfacing the latest information and helping you quickly find workarounds.

As disruptions in service occur (and we’re working very hard to ensure they don’t!), they’re reported on the dashboard with a red bar, which persists until the disruption is resolved. The screenshot below shows an example of a Google App Engine service disruption that occurred last month, on January 28th.
The current status of Cloud Platform services is shown by the column of indicators on the right side of the graph. In the example above, the indicators are all green, which means that all services were operating at normal levels at the time the screenshot was taken. You can also check out the status history of each service: the dashboard always shows the last seven days, but you can click “View Summary and History” to see any of the incidents reported over the past 90 days.

Click on any incident to get a more detailed explanation of what happened. The screenshot below details the above incident that began at 2015-01-28 17:01 and lasted 26 minutes. As you can see, the green indicators next to the dates and times mean the incident has since been resolved.
Stay up to date by subscribing to the Status Dashboard RSS feed, available from the link at the bottom of the Status Dashboard page. We have also integrated reporting of Cloud Platform service incidents in your Cloud Monitoring events log, enabling you to view incidents alongside your other dashboards and monitoring data.
While in beta, we’re reporting status for seven services, aggregated across all regions. The Status Dashboard does not replace any of our other channels for communicating outages.

We’d love to hear your feedback and suggestions. Please submit your feedback by clicking the link at the bottom right of the dashboard page.

- Posted by Amir Hermelin, Product Manager

In recent weeks we’ve published a number of posts as part of our series on Kubernetes and Google Container Engine. If this is your first foray into these blogs, we suggest you check out past Kubernetes blog posts.

Containers are emerging as a new layer of abstraction that makes it easier to get the most out of your VM infrastructure. In this post, we'll take a look at the implications of running container-based applications on fleets of VMs, and we’ll talk about why container clusters reduce deployment risk, foster more modular apps, and encourage sharing of resources.

Container Images
The first building block of a containerized application is the container image. This is a self-contained, runnable artifact, which brings with it all of the dependencies necessary to run a particular application component. The VM analogy is an ISO image, which usually contains an entire operating system and everything else installed on the machine. Unlike an ISO, a container image holds only a single application component and can be booted as a running container that shares an OS and host machine with other containers. The same app running on containers could be several GBs smaller depending on your Linux distro and the number of VMs. That will mean faster deployments and easier management.

Reducing Deployment Risk
You may have experienced deploying your application onto VMs in production, only to find that something has gone horribly wrong and you need to roll back quickly. The code may have worked on the developer’s machine, but once you run your deployment process you discover that an installation is failing for some unknown reason.

With container images, you can run an offline process (meaning not during deployment) that produces a reusable artifact that can be deployed to a container cluster. In this model, issues that would affect your deployment (like installation failures) are caught earlier and out of the critical path to production. This means you have more time to react and correct any issues, and rolling back is easier and less risky: just replace the container image with the previous version.

Modular App Components
As you're designing and building your application, it’s tempting to just add more pieces onto your existing VMs. The hard part is unwinding these pieces into modular chunks that can be scaled independently. When you suddenly run out of VM capacity, you can’t deliver a reliable service to your users, so it’s important to be able to add resources quickly without re-architecting.

When you create a Kubernetes container cluster (for example, via Google Container Engine) you’re giving your app logical compute, memory, and storage resources. And it’s really easy to add more. Since your application components don’t care where they run, you have two independent tasks to complete:
  1. Create a fleet of VMs to host your containers
  2. Create and run containers on your fleet of virtual machines
Using containers for your application components and using Kubernetes as an abstraction layer makes your app naturally more modular. Of course, it’s possible to have modularity on VMs with well designed scripts, but with containers it’s hard not to design modular applications!

Shared Resources and Forecasting
For your application containers to run together on arbitrary computers, they need an agreement about what they're allowed to do. Container clusters establish a declarative contract between resource needs and resource availability. We don’t recommend that you use containers as secure trust boundaries, but running trusted containers together and relying on VMs lets you get the most utilization within your existing VM boundaries.

Another problem you may face is how to forecast capacity across multiple people and applications. Your team can use Kubernetes to share machines while being protected from noisy neighbors via resource isolation in the kernel. Now you can see resources across your teams and apps, aggregating numerous usage signals that might be misleading on their own. You can forecast this aggregate trend into the future for more cost effective use of hardware resources.
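To make the aggregation idea concrete, here's a toy sketch (team names and usage numbers are invented for illustration) of how summing noisy per-team usage yields a smoother aggregate that can be extrapolated with a simple linear trend:

```python
# Hypothetical per-team CPU usage (cores per day, most recent last).
daily_cpu_usage = {
    "team-frontend": [20, 22, 25, 24, 28],
    "team-batch":    [40, 35, 50, 45, 55],
    "team-ml":       [10, 12, 11, 14, 16],
}

# Aggregate across teams: individual series are noisy, the sum is smoother.
aggregate = [sum(day) for day in zip(*daily_cpu_usage.values())]

# Fit a simple least-squares linear trend and project one day ahead.
n = len(aggregate)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(aggregate) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, aggregate)) / \
        sum((x - x_mean) ** 2 for x in xs)
forecast_next = y_mean + slope * (n - x_mean)

print(aggregate)                # combined daily usage across all teams
print(round(forecast_next, 1))  # projected usage for the next day
```

A real forecast would use longer windows and more robust models, but the point stands: the aggregate trend is far more predictable than any single team's curve.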

The decoupling of applications from the container cluster separates the operational tasks of managing applications and the underlying machines. Modular applications are easier to scale and maintain. Building container images before deployment reduces the risk that you’ll discover installation problems when it’s too late. And sharing resources leads to better utilization and forecasting, making your cloud smarter.

If you’d like to take container clusters for a spin yourself, sign up for a free trial and head on over to Google Container Engine.

-Posted by Brendan Burns, Software Engineer

Many developers containerize their application so that it can run on any infrastructure; however, it’s still too hard to run containers on a private cloud. Together with Mirantis, we’ve integrated Kubernetes, our open source container manager, into OpenStack. This integration will make it easier to run your apps on a private cloud, while enabling new “hybrid” cloud possibilities. To learn more, sign up for the waiting list.

New “Hybrid” Possibilities
If your company is bigger than a startup, you probably have both on-premises and public cloud infrastructure to host your portfolio of apps. This “hybrid” approach is great in theory: your on-premises infrastructure offers control and you can scale to the public cloud when necessary. Unfortunately, it’s not always easy to take advantage of this flexibility—it’s still too hard to move workloads between infrastructures.

With Kubernetes powering both your private and public clouds, you’ll be able to unlock the power of a hybrid infrastructure. For example, you might run a primary instance of your application in a private cloud, and then replicate other instances to Google Container Engine in geographies where you don’t have on-premises infrastructure.

Learn More
To learn more about how we’re working together with Mirantis, read their blog post. And feel free to stop by the Kubernetes Gathering on February 25th in San Francisco to see Mirantis give a full demo.

-Posted by Kit Merker, Product Manager, Google Cloud Platform

Deploying a new build is a thrill, but every release should be scanned for security vulnerabilities. And while web application security scanners have existed for years, they’re not always well-suited for Google App Engine developers. They’re often difficult to set up, prone to over-reporting issues (false positives)—which can be time-consuming to filter and triage—and built for security professionals, not developers.

Today, we’re releasing Google Cloud Security Scanner in beta. If you’re using App Engine, you can easily scan your application for two very common vulnerabilities: cross-site scripting (XSS) and mixed content.

While designing Cloud Security Scanner we had three goals:
  1. Make the tool easy to set up and use
  2. Detect the most common issues App Engine developers face with minimal false positives
  3. Support scanning rich, JavaScript-heavy web applications
To try it for yourself, select Compute > App Engine > Security scans in the Google Developers Console to run your first scan, or learn more here.

So How Does It Work?
Crawling and testing modern HTML5, JavaScript-heavy applications with rich multi-step user interfaces is considerably more challenging than scanning a basic HTML page. There are two general approaches to this problem:

  1. Parse the HTML and emulate a browser. This is fast, however, it comes at the cost of missing site actions that require a full DOM or complex JavaScript operations.
  2. Use a real browser. This approach avoids the parser coverage gap and most closely simulates the site experience. However, it can be slow due to event firing, dynamic execution, and time needed for the DOM to settle.
Cloud Security Scanner addresses the weaknesses of both approaches by using a multi-stage pipeline. First, the scanner makes a high-speed pass, crawling and parsing the HTML. It then executes a slow and thorough full-page render to find the more complex sections of your site.
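As a rough illustration of that two-stage idea (this is our own sketch, not the scanner's actual code; the page data and threshold are invented), a cheap parse pass can decide which pages deserve the expensive full render:

```python
# Stage 1 sees every page cheaply; stage 2 runs only where needed.
pages = {
    "/":      {"script_tags": 0},
    "/login": {"script_tags": 1},
    "/app":   {"script_tags": 12},   # rich, JavaScript-heavy page
}

def fast_parse(page):
    """Stage 1: cheap HTML parse; flag pages that need a full render.
    The script-tag threshold is a made-up heuristic for this sketch."""
    return page["script_tags"] > 5

def full_render(path):
    """Stage 2: stand-in for driving a real browser against the page."""
    return f"rendered {path}"

needs_render = [p for p, page in pages.items() if fast_parse(page)]
results = [full_render(p) for p in needs_render]
print(needs_render)   # only the complex pages reach the slow stage
```

The win is that the slow, browser-backed stage runs against a small fraction of the site rather than every page.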

While faster than a real browser crawl, this process is still too slow. So we scale horizontally. Using Google Compute Engine, we dynamically create a botnet of hundreds of virtual Chrome workers to scan your site. Don’t worry, each scan is limited to 20 requests per second or lower.
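A cap like "20 requests per second" is typically enforced with a token bucket. The sketch below is our own minimal illustration of that mechanism, not the scanner's implementation:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: tokens refill at a fixed rate,
    and each allowed request spends one token."""
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=20, capacity=20)
# Simulate 30 requests arriving at the same instant: only the first 20
# (the burst capacity) get through; the rest must wait for refill.
t0 = time.monotonic()
allowed = sum(bucket.allow(now=t0) for _ in range(30))
print(allowed)
```

In a distributed scan the same budget would be shared across workers, but the principle is identical.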

Then we attack your site (again, don’t worry)! When testing for XSS, we use a completely benign payload that relies on Chrome DevTools to execute the debugger. Once the debugger fires, we know we have JavaScript code execution, so false positives are (almost) non-existent. While this approach comes at the cost of missing some bugs due to application specifics, we think that most developers will appreciate a low effort, low noise experience when checking for security issues—we know Google developers do!

As with all dynamic vulnerability scanners, a clean scan does not necessarily mean you’re security bug free. We still recommend a manual security review by your friendly web app security professional.

Ready to get started? Learn more here. Cloud Security Scanner is currently in beta with many more features to come, and we’d love to hear your feedback. Simply click the “Feedback” button directly within the tool.

-Posted by Rob Mann, Security Engineering Manager

For those of you developing applications on the cloud, performance is often a critical concern. It turns out that it’s surprisingly difficult to evaluate cloud offerings beyond just looking at price or feature charts. When we looked at how our own users could measure the relative performance of Google Cloud Platform, it was clear they struggled with this exact problem.

We wanted to make the evaluation of cloud performance easy, so we collected input from other cloud providers, analysts, and experts from academia. The result is a cloud performance benchmarking framework called PerfKit Benchmarker. PerfKit is unique because it measures the end-to-end time to provision resources in the cloud, in addition to reporting on standard metrics of peak performance. You'll now have a way to easily benchmark across cloud platforms, while getting a transparent view of application throughput, latency, variance, and overhead.

We created a visualization tool, PerfKit Explorer, to help you interpret the results. We’re including a set of pre-built dashboards, along with data from our own internal network performance tests. This way, you'll be able to play with PerfKit Explorer without having to first input your own data.

We’re releasing the source code under the ASLv2 license, making it easy for contributors to collaborate and maintain a balanced set of benchmarks. If you want something to be removed or added, we welcome your participation through GitHub.

PerfKit is a living benchmark framework, designed to evolve as cloud technology changes, always measuring the latest workloads so you can make informed decisions about what’s best for your infrastructure needs. As new design patterns, tools, and providers emerge, we'll adapt PerfKit to keep it current. It already includes several well-known benchmarks, and covers common cloud workloads that can be executed across multiple cloud providers.
Sample Dashboard of Compute Performance
Over the last year, we’ve worked with over 30 leading researchers, companies, and customers and we're grateful for their feedback and contributions. Those companies include: ARM, Broadcom, Canonical, CenturyLink, Cisco, CloudHarmony, CloudSpectator, EcoCloud/EPFL, Intel, Mellanox, Microsoft, Qualcomm Technologies, Inc., Rackspace, Red Hat, Tradeworx Inc., and Thesys Technologies LLC. In addition, we’re excited that Stanford and MIT have agreed to lead a quarterly discussion on default benchmarks and settings proposed by the community.

We hope you find the tools useful and easy to use.

- Posted by the Google Cloud Platform Performance Team

From popular mobile apps (Foursquare) to acclaimed indie films (The Grand Budapest Hotel), some of the world’s most creative ideas have debuted at the annual SXSW Festival in Austin, Texas. For over 25 years, SXSW's goal has been to bring together the most creative people and companies to meet and share ideas. We think one of those next great ideas could be yours, and we’d like to help it be a part of SXSW.

Do you have an idea for a new app that you think is SXSW worthy? Enter it in the Google Cloud Platform Build-Off. Winners will receive up to $5,000 in cash. First prize also includes $100,000 in Google Cloud Platform credit and 24/7 support, and select winners will have the chance to present their app to 200 tech enthusiasts during the Build-Off awards ceremony at SXSW.

Here’s how it works:

  • Develop an app on Google Cloud Platform that pushes the boundary on what technology can do for music, film or gaming
  • Enter on your own or form teams of up to 3 members
  • Submit your application between 5 - 28 February 2015
  • Apps will be evaluated based on their originality, effectiveness in addressing a need, use of Google tools, and technical merit

Visit the official Build-Off website to see the full list of rules and FAQs and follow us at #GCPBuildOff on G+ and Twitter for more updates. We cannot wait to see what innovation your creativity leads to next.

- Posted by Greg Wilson, Head of Developer Advocacy


The real-time web is increasingly all around us: tracking your ride approaching on a map while you wait, seeing up-to-date statistics on a live dashboard, getting fresh messages about your friends’ social activities. These are all ways in which we’ve come to expect our online experience to reflect reality in real-time. Bringing users timely insight requires more sophisticated handling of data, data that's being generated and processed in ever-increasing volume and diversity.

By real-time, we don’t mean it in the strict sense used by hardware controllers, but near real-time: information generally perceived to be more or less immediate, useful within seconds rather than minutes. Many tools exist to help tackle different pieces of the real-time application puzzle, and Google Cloud Platform provides a wide range of components to help you build real-time oriented solutions. Examples include high-performance infrastructure, such as Google Compute Engine virtual machines, advanced container capabilities in the form of Google Container Engine, stream-oriented big data services, and a fully managed database oriented toward real-time applications, Firebase.

We’ve recently documented two tutorials demonstrating how you can compose platform services into specific solutions for real-time oriented problems. Both leverage the power of Google BigQuery to perform fast, insight-yielding queries on large amounts of data that grow and update in real-time relative to their sources. BigQuery supports a streaming option for loading data and these two initial solutions both leverage that API.

The first tutorial outlines how to perform sentiment analysis on a Twitter stream using BigQuery. To handle the high-volume intake, the tutorial demonstrates how to use Kubernetes-managed containers to build an asynchronous, real-time pipeline that buffers Twitter data in Redis before streaming it into BigQuery. The second tutorial demonstrates how to use the log forwarding tool Fluentd with a BigQuery connector to stream log data into BigQuery. This data can then be visualized with a chart directly in Google Spreadsheets, using native Apps Script support for BigQuery.
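The buffer-then-stream pattern from the first tutorial can be sketched in a few lines. This is a self-contained simulation, not the tutorial's code: a plain queue stands in for Redis, and `insert_all` is a stub standing in for a BigQuery streaming-insert call; the batch size is an arbitrary choice for the sketch.

```python
from collections import deque

BATCH_SIZE = 500
buffer = deque()         # stand-in for the Redis buffer
streamed_batches = []

def insert_all(rows):
    """Stub standing in for a BigQuery streaming insert."""
    streamed_batches.append(list(rows))

def ingest(tweet):
    buffer.append(tweet)  # producer side: fast, non-blocking append

def drain():
    """Consumer side: flush the buffer in bounded batches."""
    while buffer:
        batch = [buffer.popleft()
                 for _ in range(min(BATCH_SIZE, len(buffer)))]
        insert_all(batch)

for i in range(1200):     # simulate a burst of 1200 incoming tweets
    ingest({"id": i, "text": f"tweet {i}"})
drain()
print([len(b) for b in streamed_batches])   # → [500, 500, 200]
```

Decoupling the bursty producer from the batched consumer is what keeps intake fast while respecting the downstream API's batch-friendly shape.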

On the mobile front, Firebase is a managed database with a powerful, developer-friendly API to store and sync data in real-time. Mobile and web client side SDKs automatically sync local data state whenever the data is changed in the database. Particularly useful for mobile developers is the fact that this synchronization is completely tolerant of offline local data changes – even when users go into airplane mode or descend into the subway.
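The offline-tolerance described above boils down to: writes always land locally first, offline writes are queued, and the queue is replayed on reconnect. The sketch below illustrates that behavior with our own toy class; it is not the Firebase SDK, and the key names are invented.

```python
class SyncedStore:
    """Toy model of an offline-tolerant synced store."""
    def __init__(self):
        self.local = {}       # what the app reads, always up to date
        self.remote = {}      # stand-in for the server-side copy
        self.online = True
        self.pending = []     # writes made while offline

    def set(self, key, value):
        self.local[key] = value          # local state updates immediately
        if self.online:
            self.remote[key] = value
        else:
            self.pending.append((key, value))

    def reconnect(self):
        self.online = True
        for key, value in self.pending:  # replay offline writes in order
            self.remote[key] = value
        self.pending.clear()

store = SyncedStore()
store.set("status", "boarding")
store.online = False                     # airplane mode
store.set("status", "in flight")         # queued, local still current
assert store.remote["status"] == "boarding"
store.reconnect()
print(store.remote["status"])            # → in flight
```

The real SDK also handles conflict resolution and pushes server changes back to clients, but the local-first write path is the core of why airplane mode doesn't break the app.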

This is just a glimpse into the myriad ways in which tomorrow's real-time problems can be solved today on Google Cloud Platform. We'll continue to add other real-time oriented solutions and information to the section on solutions for real-time.

-Posted by Preston Holmes, Cloud Solutions Architect