Official CoreOS images are now available on Google Compute Engine
Friday, May 23, 2014
Our guest blog post today comes from Brandon Philips, CTO at CoreOS, a new Linux distribution that has been rearchitected to provide features needed to run massive server deployments.
Google is an organization that fundamentally understands distributed systems, and it's no surprise that Compute Engine is a perfect base for your distributed applications running on CoreOS. The clustering features in CoreOS pair perfectly with VMs that boot quickly and have a super-fast network connecting them.
Google's wide variety of machine types allows you to create the most efficient cluster for your workloads. By setting machine metadata, CPU-intensive or RAM-hungry fleet units can easily be scheduled onto a subset of the cluster optimized for that workload.
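For example, a subset of machines can be labeled via the fleet section of cloud-config, and a unit can then be pinned to those machines with an [X-Fleet] condition. This is a minimal sketch: the role=highmem label is hypothetical, and it uses the X-ConditionMachineMetadata spelling fleet used at the time:

# cloud-config fragment for the high-memory machines (hypothetical role label)
coreos:
  fleet:
    metadata: role=highmem

# corresponding [X-Fleet] section in a RAM-hungry unit file
[X-Fleet]
X-ConditionMachineMetadata=role=highmem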
CoreOS integrates with Google load balancers and replica pools to easily scale your applications across regions and zones. Using replica groups with CoreOS is easy: configure the project-level metadata to include a discovery URL and add as many machines as you need. CoreOS will automatically cluster the new machines, and fleet will begin utilizing them. If a single machine requires more specific configuration, additional cloud-config parameters can be specified during boot.
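As a sketch, the discovery URL could be published to project-level metadata with gcutil. The discovery-url key name here is illustrative, and keep in mind that setcommoninstancemetadata replaces the project's existing common metadata, so include any keys you want to keep:

$ gcutil --project=<project-id> setcommoninstancemetadata \
    --metadata=discovery-url:https://discovery.etcd.io/<token>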
The largest advantage of running on a cloud platform is access to platform services that can be used in conjunction with your cloud instances. Running on Compute Engine allows you to connect your front-end and back-end services running on CoreOS to a fully managed Cloud Datastore or Cloud SQL database. Applications that store user-generated content on Google Cloud Storage can easily start worker instances on the CoreOS cluster to process items as they are uploaded.
CoreOS uses cloud-config to configure machines after boot and automatically cluster them. Automatic clustering is achieved with a unique discovery token, which you can generate from discovery.etcd.io:
$ curl https://discovery.etcd.io/new
https://discovery.etcd.io/b97f446100a293c8107500e11c34864b
Place this new discovery token into your cloud-config document:
$ cat cloud-config.yaml
#cloud-config

coreos:
  etcd:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/b97f446100a293c8107500e11c34864b
    # multi-region and multi-cloud deployments need to use $public_ipv4
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
After generating your cloud-config, booting a 3-machine cluster can be done in a single command. Remember to substitute your unique project ID:
$ gcutil --project=<project-id> addinstance \
    --image=projects/coreos-cloud/global/images/coreos-beta-310-1-0-v20140508 \
    --persistent_boot_disk \
    --zone=us-central1-a \
    --machine_type=n1-standard-1 \
    --metadata_from_file=user-data:cloud-config.yaml \
    core1 core2 core3
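Once the command returns, a quick check confirms the three machines came up (output elided; the instance names are the ones passed above):

$ gcutil --project=<project-id> listinstances

core1, core2 and core3 should each be listed with a RUNNING status.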
To show off fleet’s scheduling abilities, let’s submit and start a very simple Docker container that echoes a message. First, SSH onto one of the machines in the cluster. Remember to replace the project ID with your own:
$ gcutil --project=coreos ssh --ssh_user=core core1
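Once connected, you can verify that all three machines found the discovery token and joined the cluster. The output below is illustrative; your machine IDs and IPs will differ:

$ fleetctl list-machines
MACHINE     IP              METADATA
b603fc4d... 10.240.246.57   -
491586a6... 10.240.246.58   -
c9de9451... 10.240.246.59   -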
Create a new unit file on disk that runs our container:
$ cat example.service
[Unit]
Description=MyApp
After=docker.service
Requires=docker.service

[Service]
RemainAfterExit=yes
ExecStart=/usr/bin/docker run busybox /bin/echo 'I was scheduled with fleet!'
To run this unit on your new cluster, submit it via fleetctl:
$ fleetctl start example.service
$ fleetctl list-units
UNIT             STATE     LOAD    ACTIVE  SUB     DESC   MACHINE
example.service  launched  loaded  active  exited  MyApp  b603fc4d.../10.240.246.57
The status of the example container can easily be fetched via fleetctl:
$ fleetctl status example.service
● example.service - MyApp
   Loaded: loaded (/run/fleet/units/example.service; linked-runtime)
   Active: active (exited) since Thu 2014-05-22 20:27:54 UTC; 4s ago
  Process: 15789 ExecStart=/usr/bin/docker run busybox /bin/echo I was scheduled with fleet! (code=exited, status=0/SUCCESS)
 Main PID: 15789 (code=exited, status=0/SUCCESS)

May 22 20:27:54 core-01 systemd[1]: Started MyApp.
May 22 20:27:57 core-01 docker[15789]: I was scheduled with fleet!
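When you're finished experimenting, the unit can be removed from the cluster just as easily:

$ fleetctl destroy example.service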
Using this fundamental tooling, you can start building complete distributed applications on top of CoreOS and Google Compute Engine. Check out the CoreOS blog for more examples of using fleet, load balancers and more.
For a complete guide on running CoreOS on Google Compute Engine, head over to the docs. To get help or brag about your awesome CoreOS setup, join us on the mailing list or in IRC.