Google Declares War on Boring Data Center Walls

Brought to you by Data Center Knowledge

Usually, if you drive by a data center, there is little indication that the huge gray building you are passing houses one of the engines of the digital economy. Sometimes, if you happen to be a data center geek, you may deduce the facility’s purpose from observing a fleet of massive cooling units along one of its walls, but even those often hide from plain sight.

By and large, data centers are nondescript, and many, if not most, in the industry like to keep it that way. After all, the fewer people who know where a facility critical to a nation’s economy (a stock exchange data center) or to its security (a mission-critical US Navy data center) is located, the better.

But Google has decided to flaunt the huge server farms it has built around the world. Judging by the images and videos the company has released in the past, the insides of these facilities are works of art; it has even published a 360-degree video tour of one of them.

Now, the company wants their external walls to both reflect their function in society and be a pleasure to look at.

“Because these buildings typically aren’t much to look at, people usually don’t—and rarely learn about the incredible structures and people inside who make so much of modern life possible,” Joe Kava, VP of Google Data Centers, wrote in a blog post.

In what it dubbed the “Data Center Mural Project,” Google has hired four artists to paint murals on the walls of four of its data centers: in Mayes County, Oklahoma; St. Ghislain, Belgium; Dublin, Ireland; and Council Bluffs, Iowa.

The artists were tasked with portraying each building’s function and reflecting the community it’s in.

The murals in Oklahoma and Belgium have been completed, and the remaining two are in progress.

Jenny Odell, the artist who worked on the Mayes County project, used Google Maps imagery to create large collages, each reflecting a different type of infrastructure in use today (Photo: Google):

[Photo: Google’s Mayes County, Oklahoma, data center mural]

Oli-B, who painted the mural on a wall of Google’s St. Ghislain data center, created an abstract interpretation of “the cloud.” He used elements specific to the surrounding community, as well as the data center site and the people who work there (Photo: Google):

[Photo: Close-up of Google’s St. Ghislain data center mural]

The four sites are just the start. The company hopes to expand the Data Center Mural Project to more locations.

More images and video are available on the Data Center Mural Project website.

Original article appeared here: Google Declares War on Boring Data Center Walls

Source: TheWHIR

RightScale Cuts Own Cloud Costs by Switching to Docker

Brought to you by Data Center Knowledge

Less than two months ago, the engineering team behind the cloud management platform RightScale kicked off a project to rethink the entire infrastructure its services run on. The team decided to package as much of the backend as possible in Docker containers, the method of deploying software whose popularity has spiked over the last couple of years, becoming one of the most talked-about technology shifts in IT.

It took the team seven weeks to complete most of the project, and Tim Miller, RightScale’s VP of engineering, declared it a success in a blog post Tuesday, saying the team achieved both of the goals it had set out to achieve: reduced cost and accelerated development.

There are two Dockers. There is the Docker container, which is a standard, open source way to package a piece of software in a filesystem with everything that piece of software needs to run: code, runtime, system tools, system libraries, etc. There is also Docker Inc., the company that created the open source technology and that has built a substantial set of tools for developers and IT teams to build, test, and deploy applications using Docker containers.

In the sense that a container can contain an application that can be moved from one host to another, Docker containers are similar to VMs. Docker argues that they are a more efficient, lighter-weight way to package software than VMs, since each VM has its own OS instance, while Docker runs on top of a single OS, and countless individual containers can be spun up in that single environment.
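
As a minimal sketch of that model, the snippet below uses the Docker SDK for Python to start a few containers on one host. It assumes the `docker` package is installed, a local Docker daemon is running, and the public `alpine` image is available; it is only an illustration of the container model, not a description of RightScale’s setup.

```python
# Minimal sketch: several lightweight containers sharing one host OS kernel.
# Assumes `pip install docker` and a running local Docker daemon.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Start three containers from the same small image; each gets its own isolated
# filesystem and process space, but there is no per-container OS instance.
containers = [
    client.containers.run("alpine", "sleep 30", detach=True, name=f"demo-{i}")
    for i in range(3)
]

for c in containers:
    print(c.name, c.status)

# Clean up the demo containers.
for c in containers:
    c.stop()
    c.remove()
```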

Another big advantage of containers is portability. Because containers are standardized and contain everything the application needs to run, they can reportedly be easily moved from server to server, VM to VM (they can and do run in VMs), cloud to cloud, server to laptop, etc.

Google uses a technology similar to Docker containers to power its services, and many of the world’s largest enterprises have been evaluating and adopting containers since Docker came on the scene about two years ago.

Read more: Docker CEO: Docker’s Impact on Data Center Industry Will Be Huge

RightScale offers a Software-as-a-Service application that helps users manage their cloud resources. It supports all major cloud providers, including Amazon, Microsoft, Google, Rackspace, and IBM SoftLayer, and key private cloud platforms, such as VMware vSphere, OpenStack, and Apache CloudStack.

Its entire platform consists of 52 services that used to run on 1,028 cloud instances. Over the past seven weeks, the engineering team containerized 48 of those services in an initiative they dubbed “Project Sherpa.”

The team migrated only 670 cloud instances to Docker containers – the number of instances that ran dynamic apps. Static apps – things like SQL databases, Cassandra rings, MongoDB clusters, Redis, Memcached, etc. – wouldn’t benefit much from switching to containers, Miller wrote.

The instances running static apps now support containers running dynamic apps in a hybrid environment. “We believe that this will be a common model for many companies that are using Docker because some components (such as storage systems) may not always benefit from containerization and may even incur a performance or maintenance penalty if containerized,” he wrote.

As a result, the number of cloud instances running dynamic apps was reduced by 55 percent, and the cloud infrastructure costs of running those apps came down by 53 percent on average.
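
A quick back-of-envelope check of those figures, using only the numbers reported in the post (the per-instance rate below is a made-up placeholder):

```python
# Back-of-envelope check of the reported Project Sherpa results.
dynamic_instances_before = 670   # instances that ran dynamic apps (from the post)
instance_reduction = 0.55        # 55 percent fewer instances after containerization
cost_reduction = 0.53            # 53 percent lower cloud cost for those apps, on average

instances_after = round(dynamic_instances_before * (1 - instance_reduction))
print(instances_after)           # ~302 instances still running dynamic apps

# Illustrative cost impact, assuming a hypothetical $100/month per instance:
cost_before = dynamic_instances_before * 100
cost_after = cost_before * (1 - cost_reduction)
print(cost_before, round(cost_after))   # 67000 -> 31490
```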

RightScale has also noticed an improvement in development speed. The standardization and portability that containers offer help developers debug, work on applications they have no experience with, and access integration systems more flexibly. Product managers can check out features that are being developed without getting developers involved.

“There are certainly more improvements that we will make in our use of Docker, but we would definitely consider Project Sherpa a success based on the early results,” Miller wrote.

Original article appeared here: RightScale Cuts Own Cloud Costs by Switching to Docker

Source: TheWHIR

Microsoft to Launch Cloud Data Centers in Korea

Brought to you by Data Center Knowledge

Microsoft announced plans to build cloud data centers in South Korea as the race among the largest public cloud providers, including Amazon and Google, to expand the global reach of their cloud infrastructure continues.

Microsoft Azure continues to lead in terms of the number of physical locations its customers can choose to host their virtual infrastructure in. Twenty-four Azure regions are available today, and, including the upcoming Korea regions, the company has announced eight more that are underway.

While ahead of the competition in global reach, in Korea Microsoft is catching up to Amazon, which launched a Seoul cloud region in January. Today, there are nine Azure regions in Asia, including China, Hong Kong, Singapore, India, and Japan, and two in Australia.

Google, which has fewer dedicated cloud data center locations than either of its two largest competitors in the space, has only one in Asia: a three-zone region in Taiwan.

After a brief slowdown in data center spending in 2015, the big three cloud providers have ramped up cloud data center construction this year.

Microsoft reported a 65-percent increase in data center spend year over year in the first quarter. Google said in March it would add 12 new data center locations to expand its cloud infrastructure. Amazon increased capital spending by 35 percent in the first quarter and attributed a big portion of the increase to investment in AWS.

The new Azure data center region will be located in Seoul, Takeshi Numoto, Microsoft’s corporate vice president for cloud and enterprise, wrote in a blog post announcing the plans.

He also announced that new data centers hosting Azure and Office 365 have come online in Toronto and Quebec City, Microsoft’s first cloud data centers in Canada.

Original article appeared here: Microsoft to Launch Cloud Data Centers in Korea

Source: TheWHIR

Dutch Data Center Group Says Draft Privacy Shield Weak

Brought to you by Data Center Knowledge

An alliance of data center providers and data center equipment vendors in the Netherlands, whose members include some of the world’s biggest data center companies, has come out against the current draft of Privacy Shield, the set of rules proposed by the European Commission to replace Safe Harbor, the legal framework that governed data transfers between the US and Europe until the EU’s top court struck it down last year.

The Dutch Datacenter Association issued a statement Monday saying Privacy Shield “currently offers none of the improvements necessary to better safeguard the privacy of European citizens.”

The list of nearly 30 association participants includes Equinix and Digital Realty, two of the world’s largest data center providers, as well as European data center sector heavyweights Colt, based in London, and Interxion, a Dutch company headquartered just outside of Amsterdam.

In issuing the statement, the association sided with the Article 29 Working Party, a regulatory group that consists of data protection officials from all EU member states. Article 29 doesn’t create or enforce laws, but data regulators in EU countries base their laws on its opinions, according to the Guardian.

Related: Safe Harbor Ruling Leaves Data Center Operators in Ambiguity

In April, the Working Party said it had “considerable reservations about certain provisions” in the draft Privacy Shield agreement. One of the reservations was that the proposed rules did not provide adequate privacy protections for European data. Another was that Privacy Shield wouldn’t fully protect Europeans from mass surveillance by US secret services, such as the kind the US National Security Agency has been conducting according to documents leaked by former NSA contractor Edward Snowden.

Amsterdam is one of the world’s biggest and most vibrant data center and network connectivity markets. Additionally, there are several smaller but active local data center markets in the Netherlands, such as Eindhoven, Groningen, and Rotterdam.

There are about 200 multi-tenant data centers in the country, according to a 2015 report by the Dutch Datacenter Association. Together, they house about 250,000 square meters of data center space.

The association has support from a US partner, called the Internet Infrastructure Coalition, which it referred to as its “sister organization.” David Snead, president of the I2Coalition, said his organization understood the concerns raised by Article 29.

“We believe that many of the concerns raised by the Working Party can be resolved with further discussions,” he said in a statement.

Original article appeared here: Dutch Data Center Group Says Draft Privacy Shield Weak

Source: TheWHIR

Yahoo Data Center Team Staying "Heads-Down" Amid Business Turmoil

Brought to you by Data Center Knowledge

While the online businesses they support face an uncertain future, members of the Yahoo data center operations team are keeping busy, continuing to make sure Yahoo’s products are reaching the screens of their users.

“The data center operations team has a lot of work to do,” Mike Coleman, senior director of global data center operations at Yahoo, said in a phone interview. “The potential review of strategic alternatives … is being explored, but my operations team is heads-down, focused on powering our products and services on behalf of our users.”

The struggling company has been soliciting bids for its assets, including its core online business, since earlier this year. Verizon Communications is among interested suitors, according to reports.

Yahoo is exploring the sale of virtually all of its assets, which include company-owned data centers in Lockport, New York; Quincy, Washington; La Vista, Nebraska; and Singapore. It also leases data center capacity in a number of locations.

So far, however, the “review of strategic alternatives” hasn’t had any effect on the Yahoo data center operations team. “No effect for us at all,” Coleman said.

The round of layoffs the company announced in February did affect some Yahoo employees on the Lockport campus, according to reports, but that campus includes both data centers and office space.

In fact, Coleman’s team recently brought online the latest data center on the La Vista campus – a $20 million expansion announced last November. The expansion space was launched in April, Coleman said.

He declined to disclose how much capacity the project added, but it involved installing 6MW of backup generator capacity. This doesn’t mean the team added 6MW of data center capacity, he pointed out.

The expansion was completed relatively quickly, which Coleman attributed in part to a new online environmental quality permitting process the State of Nebraska launched last year. Yahoo’s air quality permit for the 6MW of diesel generator capacity was the first issued after the new process was instated, a Yahoo spokesperson said in an emailed statement.

Nebraska Governor Pete Ricketts held a press conference on the new process at the Yahoo data center in La Vista Wednesday.

The state switched from a paper-based application process to an online one, which has shrunk the timeline from months to days, Coleman said. Nebraska officials have touted the change, which applies to air quality and storm water permits, as a step toward making the state more business-friendly and helping the construction industry cut through administrative red tape.

Original article appeared here: Yahoo Data Center Team Staying “Heads-Down” Amid Business Turmoil

Source: TheWHIR

Green Grid Seeking Clarity Following ASHRAE PUE Agitation

Brought to you by Data Center Knowledge

No, PUE is not dead. It’s alive and well, and the fact that an ASHRAE committee backed away from using a version of PUE in the new data center efficiency standard that’s currently in the works hasn’t changed that.

The Green Grid Association, the data center industry group that championed the most widely used data center energy efficiency metric, has found itself once again in the position of having to defend the metric’s viability after the ASHRAE committee struck PUE from an earlier draft of the standard.

Roger Tipley, Green Grid president, said an important distinction has to be taken into account. The type of PUE ASHRAE initially proposed, Design PUE, is not the PUE Green Grid has been championing. “This Design PUE concept is not a Green Grid thing,” he said.

Green Grid’s PUE is for measuring infrastructure efficiency of operational data centers over periods of time. Design PUE is for evaluating efficiency of the design of a new data center or an expansion.

New Data Center Standard

ASHRAE Standard 90.4 is being developed specifically for data centers and telecommunications buildings. The standard that’s in place today, 90.1, covers buildings of almost every type – the only exception is low-rise residential buildings – and 90.4 is being developed in recognition that data centers and telco buildings have certain design elements that are unique and require special treatment.

ASHRAE’s efficiency standards are important because local building officials use them extensively in inspections and permitting, and non-compliance on a building owner’s part can be costly.

During the course of a standard’s development, the responsible ASHRAE committee puts out multiple drafts and collects comments from industry experts. Every draft is made public and open for comment for a limited period, and the draft that follows takes that feedback into consideration.

The latest draft of ASHRAE Standard 90.4 was released for comment on April 29, and the comment period will close on May 29. To comment or learn more, visit www.ashrae.org/publicreviews.

Green Grid Not Opposed to Design PUE

While Green Grid had little to do with Design PUE, the organization is not opposed to it, Tipley said. “It makes certain sense for the design community to have some [energy efficiency] targets to go for.”

The 90.4 committee struck Design PUE from the initial draft after some prominent data center industry voices spoke out against its inclusion in the standard. The argument against it was that it would put colocation providers at a disadvantage.

PUE compares the amount of power used by IT equipment to the total amount of power the data center consumes. PUE gets lower (which means better) as the portion of total power that goes to IT gets higher. More often than not, colo providers launch new data centers at very low utilization rates.

They have to keep an inventory of available capacity to serve new or expanding customers, which means they theoretically cannot get close to ideal PUE just because of the nature of their business.
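
For reference, the metric discussed here boils down to a simple ratio, and a quick illustrative example (the numbers below are made up) shows why low utilization hurts colo providers:

PUE = total facility power / power delivered to IT equipment

A facility drawing 1.5 MW in total while its IT gear draws 1.0 MW has a PUE of 1.5. A lightly utilized colocation site might draw 0.5 MW in total with only 0.25 MW going to IT, for a PUE of 2.0, even if its cooling and electrical plant is efficient by design.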

Different Metrics, Similar Goals

The committee has replaced Design PUE with more granular metrics that in some ways resemble PUE but focus separately on electrical infrastructure efficiency (Electrical Loss Component, or ELC) and on mechanical infrastructure efficiency (Mechanical Load Component, or MLC). They have also proposed a third metric that combines the two.

The committee’s new approach to measuring efficiency is somewhat similar to Green Grid’s Data Center Maturity Model, which also takes into consideration contributions of individual infrastructure components to the facility’s overall PUE, Tipley pointed out.

In fact, Green Grid is planning to evaluate ELC and MLC for inclusion in the second version of the maturity model, which is targeted for release in 2017, he said.

There is value to such levels of granularity, and at the end of the day, ASHRAE’s metrics have the same goal as Green Grid’s: higher data center efficiency. “The end result is they’re trying to get to a low PUE,” Tipley said.

The comment period for the latest draft of ASHRAE Standard 90.4, Energy Standard for Data Centers, ends on May 29. To review the draft and to comment, visit www.ashrae.org/publicreviews.

Original report appeared here: Green Grid Seeking Clarity Following ASHRAE PUE Agitation

Source: TheWHIR

Cold Storage Comes to Microsoft Cloud

Brought to you by Data Center Knowledge

Microsoft has launched a cold storage service on its Azure cloud, offering a low-cost storage alternative for data that’s not accessed frequently.

The launch is a catch-up move by Microsoft, whose biggest public cloud competitors have had cold-storage options for some time. Amazon launched its Glacier service in 2012, and Google rolled out its Cloud Storage Nearline option last year.

The basic concept behind cold storage is that a lot of data people and companies generate is accessed infrequently, so it doesn’t require the same level of availability and access speed as critical applications do. Therefore, the data center infrastructure built to store it can be cheaper than primary cloud infrastructure, with the cost savings passed down to the customer in the case of a cloud provider.

Microsoft’s new service is called Cool Blob Storage, and it costs from $0.01 to $0.048 per GB per month, depending on the region and the total volume of data stored. The range for the “Hot” Blob storage tier is $0.0223 to $0.061 per GB, so some customers will be able to cut the cost of storing some of their data in Microsoft’s cloud by more than 50 percent if they opt for the “Cool” access tier.
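
As a rough illustration of that comparison, here is a sketch using the low end of each published range (actual rates vary by region and volume, and the workload size is hypothetical):

```python
# Rough cost comparison of the Cool and Hot Blob storage tiers (low end of each range).
cool_rate = 0.01     # $/GB/month, low end of the Cool tier range cited above
hot_rate = 0.0223    # $/GB/month, low end of the Hot tier range cited above

data_gb = 100 * 1000  # hypothetical 100 TB of rarely accessed data

cool_monthly = data_gb * cool_rate
hot_monthly = data_gb * hot_rate
savings = 1 - cool_monthly / hot_monthly

print(f"Hot: ${hot_monthly:,.0f}/mo  Cool: ${cool_monthly:,.0f}/mo  savings: {savings:.0%}")
# -> roughly 55 percent cheaper, consistent with the "more than 50 percent" figure
```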

For some time now, web-scale data center operators of Microsoft’s caliber have looked at reducing infrastructure costs by better aligning infrastructure investment with the type of data being stored. Facebook has revealed more details than others about the way it approaches cold storage, including open sourcing some of its cold storage hardware designs through the Open Compute Project.

Related: Visual Guide to Facebook’s Open Source Data Center Hardware

The social network has designed and built separate data centers next to its core server farms in Oregon and North Carolina specifically for this purpose. The storage systems and the facilities themselves are optimized for cold storage and don’t have redundant electrical infrastructure or backup generators. The design has resulted in significant energy and equipment cost savings, according to Facebook’s infrastructure team.

Read more: Cold Storage: the Facebook Data Centers that Back Up the Backup

Related: Google Says Cold Storage Doesn’t Have to Be Cold All the Time

Microsoft hasn’t shared details about the infrastructure behind its new cold storage service. In 2014, however, it published a paper describing a basic building block for an exascale cold storage system called Pelican.

Pelican is a rack-scale storage unit designed specifically for cold storage in the cloud, according to Microsoft. It is a “converged design,” meaning everything, from mechanical systems to hardware and software, was designed to work together.

Pelican’s peak sustainable read rate was 1GB per second per 1PB of storage when the paper came out, and it could store more than 5PB in a single rack, which meant an entire rack’s data could be transferred out in 13 days. Microsoft may have a newer-generation cold storage design with higher throughput and capacity today.
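
A back-of-envelope check of that drain time, using only the figures above (real transfers would also contend with overheads the paper accounts for):

```python
# Time to read out an entire Pelican rack at its sustained rate.
capacity_pb = 5                      # "more than 5PB" per rack; use 5 as a floor
read_rate_gb_s = 1 * capacity_pb     # 1 GB/s per PB of storage -> ~5 GB/s per rack

capacity_gb = capacity_pb * 1_000_000   # 1 PB ~= 1,000,000 GB (decimal units)
seconds = capacity_gb / read_rate_gb_s  # ~1,000,000 seconds
print(round(seconds / 86_400, 1))       # ~11.6 days, the same ballpark as the ~13 days cited
```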

Cool Blob Storage and the regular-access Hot Blob Storage have similar performance in terms of latency and throughput, Sriprasad Bhat, senior program manager for Azure Storage, wrote in a blog post recently announcing the launch.

There is a difference in availability guarantees between the two, however. The Cool access tier offers 99 percent availability, while the Hot access tier guarantees 99.9 percent.

With the RA-GRS redundancy option, which replicates data for higher availability, Microsoft will give you a 99.9 percent uptime SLA for the Cool access tier versus 99.99 percent for the Hot access tier.
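
Translated into allowable downtime, those SLA tiers work out roughly as follows (a quick conversion assuming a 30-day month):

```python
# Convert availability SLAs into allowable downtime per 30-day month.
minutes_per_month = 30 * 24 * 60  # 43,200 minutes

slas = [("Cool", 0.99), ("Hot", 0.999), ("Cool with RA-GRS", 0.999), ("Hot with RA-GRS", 0.9999)]
for label, sla in slas:
    downtime = (1 - sla) * minutes_per_month
    print(f"{label}: up to {downtime:.0f} minutes of downtime per month")
# 99%    -> ~432 minutes (~7.2 hours)
# 99.9%  -> ~43 minutes
# 99.99% -> ~4 minutes
```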

Original article appeared here: Cold Storage Comes to Microsoft Cloud

Source: TheWHIR