Why Salesforce Bought Coolan, a Data Center Optimization Startup

Brought to you by Data Center Knowledge

Salesforce made a surprising move Thursday, acquiring Coolan, a three-year-old Silicon Valley startup whose software uses Big Data analytics and machine learning to help companies make smarter data center management and hardware buying decisions.

In a blog post, Coolan’s co-founder, Amir Michael, who used to design servers for Google and later for Facebook, and who co-founded Facebook’s open source data center and hardware design initiative, the Open Compute Project, said Coolan would work to optimize Salesforce’s infrastructure.

So far, the deal appears to be mostly about Salesforce looking to improve the way it builds and manages its own data centers. The company’s core business is selling cloud-based business software tools, and it’s unlikely – although not impossible – that it will sell data center management services based on Coolan’s platform to others.

“Once the transaction has closed, the Coolan team will help Salesforce optimize its infrastructure as it scales to support customer growth around the world,” Michael wrote. “I will continue my work with the Open Compute Project to further its mission of making hardware open, efficient, and scalable.”

If it wasn’t clear already, the acquisition confirms once more that Marc Benioff’s cloud software giant is not thinking of getting rid of its own data centers, which it leases from data center providers, despite announcing earlier this year that it would use Amazon Web Services to deploy its core products in select international markets.

Neither company has shared much detail about Salesforce’s plans for Coolan beyond Michael’s blog post. Reached by phone Thursday, Michael said he could not talk about the deal and was instructed by Salesforce to direct all inquiries to them, while a Salesforce spokesperson, responding to a request for comment, referred us back to his blog post.

Salesforce Rethinking Data Centers

Salesforce has recently been revamping its approach to data center infrastructure, seeking to adopt a strategy similar to that of web-scale operators like Google and Facebook. Those strategies rely, among other things, on custom, stripped-down hardware, minimal variation between the hardware SKUs that support different services, and heavy automation.

Both Coolan’s technology and its team, some of whom were deeply involved in building and running infrastructure for those web-scale data center operators, will be useful to Salesforce’s current infrastructure efforts.

Read more: Salesforce Latest Convert to the Web-Scale Data Center Way

Coolan’s Platform Lowers Data Center Costs

Salesforce likely sees Coolan’s software platform as a competitive advantage. The platform, which the startup has been providing to customers as a cloud-based service, helps companies save a lot of money in their data centers.

In one recent project for a customer, Coolan identified that power supplies in the customer’s servers were grossly overprovisioned, resulting in 300,000 kWh of data center energy waste per year. This customer, whose name Coolan did not reveal, had 1,600 servers. A company like Salesforce, which has global data center infrastructure that continues to scale, can get a lot more savings out of such improvements.
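
The published figures are easy to sanity-check. Here is a minimal sketch of the arithmetic in Python; the per-server breakdown and the electricity rate are our own illustrative assumptions, not Coolan’s numbers:

```python
# Back-of-the-envelope check of the reported figures. The per-server
# breakdown and the $0.07/kWh rate are illustrative assumptions.
HOURS_PER_YEAR = 8760

total_waste_kwh = 300_000  # annual waste reported for the project
server_count = 1_600       # fleet size of the unnamed customer

waste_per_server_kwh = total_waste_kwh / server_count           # 187.5 kWh/yr
avg_waste_watts = waste_per_server_kwh * 1000 / HOURS_PER_YEAR  # ~21 W

annual_cost_usd = total_waste_kwh * 0.07  # hypothetical industrial rate

print(f"{waste_per_server_kwh:.1f} kWh per server per year, "
      f"~{avg_waste_watts:.0f} W continuous, ${annual_cost_usd:,.0f}/yr")
```

That works out to roughly 21 watts of continuous waste per server, small per machine but quick to add up across a fleet.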

Read more: How Server Power Supplies are Wasting Your Money

Another example of the platform’s application is identifying the best time to replace a server. There isn’t a magic number that works for every company, and total cost of ownership changes differently over time for different businesses. Being able to pinpoint when a multitude of factors, such as server cost, data center CAPEX and OPEX, and the cost of networking equipment and data center racks, line up in a way that makes keeping an old server more expensive than replacing it with a new one is the kind of thing Coolan is good at.
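
The underlying comparison can be sketched as a simple cost model: keep the old server while its yearly cost stays below that of a replacement. Coolan has not published its model, so everything below, prices, power draws, and growth rates included, is a hypothetical illustration:

```python
# Toy replace-vs-keep model. All numbers are hypothetical placeholders,
# not Coolan's actual inputs or methodology.

def annual_cost_old(age_years, power_kw=0.4, kwh_price=0.07,
                    maint_base=300.0, maint_growth=0.15):
    """Yearly cost of keeping the old server: energy plus maintenance
    that grows as the hardware ages."""
    energy = power_kw * 8760 * kwh_price
    maintenance = maint_base * (1 + maint_growth) ** age_years
    return energy + maintenance

def annual_cost_new(server_price=4000.0, amortize_years=4,
                    power_kw=0.25, kwh_price=0.07, maint=150.0):
    """Yearly cost of a replacement: amortized purchase price plus the
    lower energy and maintenance bills of newer hardware."""
    return server_price / amortize_years + power_kw * 8760 * kwh_price + maint

for age in range(1, 11):
    if annual_cost_old(age) > annual_cost_new():
        print(f"Replacement pays off around year {age}")
        break
```

In a real analysis each of these inputs varies by business, which is exactly why there is no universal magic number.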

Read more: When is the Best Time to Retire a Server

Machine Learning in Data Center Management

To arrive at its conclusions, the platform analyzes operational data from a given customer’s own data centers as well as historical operational data collected from past customers’ facilities. It stores all the data it collects on Amazon’s cloud, where much of its computing also takes place, Michael told Data Center Knowledge in a recent interview.

Coolan uses machine learning to help with everything, from identifying inefficiencies to predicting failure in server components, he said.
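
The article doesn’t describe Coolan’s models, but component failure prediction is commonly framed as supervised classification over telemetry. A self-contained toy sketch of that general approach, using synthetic SMART-style drive features (the data, features, and model choice are all invented for illustration):

```python
# Generic failure-prediction sketch, NOT Coolan's actual approach:
# train a classifier to flag components likely to fail based on
# synthetic SMART-like telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.poisson(2, n),         # reallocated sector count
    rng.normal(40, 5, n),      # drive temperature (C)
    rng.uniform(0, 40000, n),  # power-on hours
])
# Synthetic ground truth: failure risk rises with bad sectors and age
risk = 0.05 * X[:, 0] + 0.00001 * X[:, 2]
y = (rng.random(n) < risk).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```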

By applying machine learning to data center management, Coolan is taking a page out of Google’s playbook, although it’s unclear whether there are any similarities at all in the ways the two companies apply it.

Google has been using machine learning to optimize its data centers for some time now. Its latest effort, applying AI technology from its DeepMind unit to data center energy efficiency, has reportedly resulted in a 15 percent improvement in Power Usage Effectiveness (PUE).
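
PUE is simply total facility energy divided by the energy delivered to IT equipment, so the reported gain is easy to put in concrete terms. A small illustration, reading the figure as a straight 15 percent reduction in the ratio and using hypothetical loads:

```python
# PUE = total facility energy / energy delivered to IT equipment.
it_load_kw = 10_000  # hypothetical IT load
overhead_kw = 5_000  # hypothetical cooling, power conversion, lighting

pue_before = (it_load_kw + overhead_kw) / it_load_kw  # 1.50
pue_after = pue_before * (1 - 0.15)                   # 1.275

# Overhead that no longer has to be bought at this scale:
saved_kw = (pue_before - pue_after) * it_load_kw      # 2,250 kW
print(f"PUE {pue_before:.2f} -> {pue_after:.3f}, ~{saved_kw:,.0f} kW saved")
```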

Scaling Smart

By acquiring Coolan, Salesforce gets its hands on some sophisticated, cutting-edge data center management and optimization capabilities, as well as a team of experts who are likely to have a lot of influence on the way the world’s biggest cloud CRM company builds out its infrastructure going forward.

Scale is crucial for today’s cloud providers, and scaling infrastructure in a smart way is everything, affecting both the company’s ability to serve its customers with high performance and minimal downtime and its ability to make a profit.

Source: TheWHIR

Equinix Data Center Outage in London Blamed on Faulty UPS

Brought to you by Data Center Knowledge

Wednesday’s data center outage at one of the Telecity facilities in London that Equinix took over in its recent acquisition of the European service provider was caused by a problem with a UPS system, Equinix told its customers via email, according to news reports. Studies show that UPS failure is the most common cause of data center outages.

The company did not say what exactly went wrong with the UPS, but the outage caused connectivity problems for many subscribers to BT’s internet services; a BT spokeswoman told the Register that about one in every 10 attempts by its users to reach a website failed during the outage.

The data center outage affected a portion of BT subscribers in England, Wales, Scotland, and Northern Ireland, according to the BBC’s review of the affected areas posted on BT’s status page.

Equinix issued a statement from Russell Poole, its managing director for the UK, confirming the outage at the former Telecity LD8 data center. “This impacted a limited number of customers, however service was restored within minutes,” he said.

A spokesman for the London Internet Exchange (LINX) told the BBC that the outage lasted from 7:55 am to 8:17 am BST.

The Telecity LD8 data center, now called 8/9 Harbour Exchange, is one of five data centers that make up the Telecity campus in the London Docklands, the crown jewel of the portfolio Equinix acquired for $3.6 billion in a deal that closed earlier this year. The campus hosts a substantial portion of the LINX infrastructure, as well as many financial services firms, cloud providers, and companies in other business verticals.

A data center outage impacting a user like LINX can have effects that reach even wider than an outage impacting a major internet service provider like BT. Internet exchanges are where many network operators and internet content providers interconnect their networks to more effectively deliver traffic to their end users.

BT is one of 700 LINX members. The LINX spokesman, however, pointed out that there are usually redundant network routes that ensure traffic continues to flow when there is an outage on one of them.

“Over 80% of our traffic continued to flow and it immediately started to recover even before the power was restored,” he said.

UPS failure has for years been the most frequently cited cause of data center outages, according to studies by Emerson Network Power and the Ponemon Institute. Last year, UPS and UPS battery failures caused 25 percent of outages – up from 24 percent in 2013 but down from 29 percent in 2010, according to their most recent study, released earlier this year.

Source: TheWHIR

Google Launches Its First Cloud Data Center on West Coast

Brought to you by Data Center Knowledge

Google has brought online its first West Coast cloud data center, promising US and Canadian cloud users on or close to the coast a 30 to 80 percent reduction in latency if they use the new region instead of the one in central US, which was closest to them before the new region launched.
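
A range like that is plausible on physics alone, since round-trip time is bounded by signal propagation in fiber. A rough sanity check in Python, using approximate great-circle distances (real network paths are longer and add routing and queuing overhead):

```python
# Best-case round-trip time is limited by the speed of light in fiber,
# roughly 200,000 km/s. Distances below are approximate.
FIBER_KM_PER_S = 200_000

def min_rtt_ms(distance_km):
    return 2 * distance_km / FIBER_KM_PER_S * 1000

sf_to_central_us = min_rtt_ms(2400)  # San Francisco -> Iowa, ~24 ms
sf_to_oregon = min_rtt_ms(850)       # San Francisco -> Oregon, ~8.5 ms

reduction = 1 - sf_to_oregon / sf_to_central_us
print(f"~{reduction:.0%} lower best-case RTT")  # ~65%, inside 30-80%
```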

This isn’t the first Google data center on the West Coast; the company has had a data center campus in The Dalles, Oregon, for a decade. It is, however, the first time Google’s cloud services, as opposed to other Google services such as search or maps, are served out of Oregon.

With the new cloud data center online, the company said its cloud users in cities like Vancouver, Seattle, Portland, San Francisco, and Los Angeles should expect to see big performance improvements if they choose to host their virtual infrastructure in the new region, called us-west1.

See also: What Cloud and AI Do and Don’t Mean for Google’s Data Center Strategy

The launch is part of an effort Google kicked off recently to expand its global cloud data center infrastructure as it competes with cloud giants like Amazon Web Services and Microsoft Azure, both of which are far ahead in the number of cloud availability regions. The company said in March it would add 10 data center locations to support its cloud services by both leasing data center space and building its own facilities.

One of the planned new cloud data center locations on the list will be in Japan, the company has disclosed.

See also: Nadella: We’ll Build Cloud Data Centers Wherever Demand Takes Us

The Oregon cloud region has been launched with three initial services: Compute Engine, Cloud Storage, and Container Engine, the company said in a blog post announcing the launch Wednesday. The region includes two Compute Engine zones for high-availability applications, which usually means there are two separate data halls, each with its own independent infrastructure.

“A zone usually has power, cooling, networking, and control planes that are isolated from other zones, and most single failure events will affect only a single zone,” Google says on its cloud platform website.
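
In practice, taking advantage of the two zones mostly means spreading replicas across them so that a single-zone failure takes out at most part of a deployment. A minimal sketch of that placement pattern; the zone names assume the region’s two launch zones follow Google’s usual region-letter naming:

```python
# Round-robin replicas across the region's zones so a single-zone
# failure takes out at most half of them. Zone names are assumed to
# follow Google's region-letter convention for us-west1.
ZONES = ["us-west1-a", "us-west1-b"]

def assign_zones(replica_count):
    """Map replica names to zones in round-robin order."""
    return {f"web-{i}": ZONES[i % len(ZONES)] for i in range(replica_count)}

print(assign_zones(4))
# {'web-0': 'us-west1-a', 'web-1': 'us-west1-b',
#  'web-2': 'us-west1-a', 'web-3': 'us-west1-b'}
```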

Oregon is Google cloud’s fifth availability region. The other ones are Central US, Eastern US, Western Europe, and Eastern Asia.

There is a lot more detail about Google’s cloud data center strategy in this presentation by Joe Kava, the man in charge of the company’s data center operations:

[embedded content]

Source: TheWHIR

LinkedIn Pushes Own Data Center Hardware Standard

Brought to you by Data Center Knowledge

LinkedIn, the social network for professionals that was acquired by Microsoft in June, has announced a new open design standard for data center servers and racks that it hopes will gain wide industry adoption.

It’s unclear, however, how the initiative fits with the infrastructure strategy of its new parent company, which has gone all-in with Facebook’s Open Compute Project, an open source data center and hardware design initiative with its own open design standards for the same components. When it joined OCP two years ago, Microsoft also adopted a data center strategy that would standardize hardware on its own OCP-inspired designs across its global operations.

Yuval Bachar, who leads LinkedIn’s infrastructure architecture and who unveiled the Open19 initiative in a blog post Tuesday, told us earlier this year that the company had decided against using OCP hardware when it was switching to a hyperscale approach to data center deployment, because OCP hardware wasn’t designed for standard data centers and data center racks. That, however, was in March, before LinkedIn was gobbled up by the Redmond, Washington-based tech giant.

“Our plan is to build a standard that works in any EIA 19-inch rack in order to allow many more suppliers to produce servers that will interoperate and be interchangeable in any rack environment,” Bachar wrote in the blog post.

See also: LinkedIn Data Centers Adopting the Hyperscale Way

The standard OCP servers are 21 inches wide, and so are the standard OCP racks. Facebook switched to 21 inches in its data centers several years ago, and announced its 21-inch rack design, called Open Rack, in 2012. Multiple vendors, however, have designed OCP servers in the traditional 19-inch form factor and racks that accommodate them.

There is more to LinkedIn’s proposed Open19 standard than rack width, however. Here is the full list of Open19 specifications:

  • Standard 19-inch 4 post rack
  • Brick cage
  • Brick (B), Double Brick (DB), Double High Brick (DHB)
  • Power shelf—12 volt distribution, OTS power modules
  • Optional Battery Backup Unit (BBU)
  • Optional Networking switch (ToR)
  • Snap-on power cables/PCB—200-250 watts per brick
  • Snap-on data cables—up to 100G per brick
  • Provides linear growth on power and bandwidth based on brick size (see the sketch below)
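
Taking the last item at face value, the power and bandwidth envelope of a configuration scales with the number of brick slots it occupies. A toy calculator under that reading; the slot multipliers for the larger form factors are our assumption, not something the list above specifies:

```python
# Linear power/bandwidth scaling per the Open19 spec list. The slot
# counts for DB and DHB are assumptions made for illustration.
BRICK_SLOTS = {"B": 1, "DB": 2, "DHB": 2}

WATTS_PER_SLOT = (200, 250)  # per-brick power range from the spec
GBPS_PER_SLOT = 100          # up to 100G per brick

def envelope(form_factor):
    slots = BRICK_SLOTS[form_factor]
    lo, hi = (w * slots for w in WATTS_PER_SLOT)
    return f"{form_factor}: {lo}-{hi} W, up to {slots * GBPS_PER_SLOT}G"

for ff in BRICK_SLOTS:
    print(envelope(ff))
```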

Illustration of LinkedIn’s proposed Open19 rack and server design (Image: LinkedIn)

Bachar and his colleagues believe designs that follow these specs “will be more modular, efficient to install, and contain components that are easier to source than other custom open server solutions.”

Making open hardware easier to source is an important issue and probably the strongest argument for an alternative standard to OCP. We have heard from multiple people close to OCP that sourcing components for OCP gear is difficult, especially if you’re not a high-volume buyer like Facebook or Microsoft. OCP vendors today are focused predominantly on serving those hyperscale data center operators, which substantially limits access to that kind of hardware for smaller IT shops.

Read more: Why OCP Servers are Hard to Get for Enterprise IT Shops

Still, the amount of industry support OCP has gained over the last several years will make it difficult for a competing standard to take hold, especially given that one of OCP’s biggest supporters is now LinkedIn’s parent company. Other OCP members include Apple, Google, AT&T, Deutsche Telekom, and Equinix, as well as numerous large financial institutions and the biggest hardware and data center infrastructure vendors.

Source: TheWHIR

Data Center Customers Want More Clean Energy Options

Brought to you by Data Center Knowledge

Today, renewable energy as a core part of a company’s data center strategy makes more sense than ever, and not only because it looks good as part of a corporate sustainability strategy. The price of renewable energy has come down enough over the last several years to be competitive with energy generated by burning coal or natural gas, but there’s another business advantage to the way most large-scale renewable energy purchase deals are structured today.

Called Power Purchase Agreements, these deals secure a fixed energy price for the buyer over long periods of time, often decades, giving the buyer an effective way to hedge against energy-market volatility. A 20-year PPA with a big wind-farm developer insures against sticker shock for a long time, which for any major data center operator, for whom energy is one of the biggest operating costs, is a valuable proposition.
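
The hedging value is easy to illustrate with a toy model that compares a fixed PPA rate against a drifting market price. All prices, volumes, and volatility figures below are invented for illustration:

```python
# Toy comparison of a fixed-price PPA vs. a volatile market rate.
# Every number here is a hypothetical placeholder.
import random

random.seed(1)
ppa_price = 0.045        # fixed $/kWh locked in for the contract term
annual_kwh = 50_000_000  # hypothetical data center consumption

market_price = 0.045
total_ppa = total_market = 0.0
for year in range(20):
    # Random-walk market price: 2%/yr drift, +/-15% yearly swings
    market_price *= 1.02 * (1 + random.uniform(-0.15, 0.15))
    total_ppa += annual_kwh * ppa_price
    total_market += annual_kwh * market_price

print(f"20-yr spend: PPA ${total_ppa:,.0f} vs market ${total_market:,.0f}")
```

The point isn’t that the PPA always comes out cheaper; it’s that the buyer knows the number in advance.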

SEE ALSO: Report: US No Longer Lowest-Risk Country for Data Centers

Internet and cloud services giants, who operate some of the world’s largest data centers, are well aware of this, and so is the Pentagon. The US military is second only to Google in the amount of renewable energy generation capacity it has secured through long-term PPAs, according to a recent Bloomberg report.

Data Center Providers Enter Clean-Energy Space

Google’s giant peers, including Amazon Web Services, Facebook, Microsoft, and Apple, are also on Bloomberg’s list of 20 institutions that consume the most renewable energy through such agreements, and so are numerous US corporations in different lines of business, such as Wal-Mart Stores, Dow Chemical, and Target. There are two names on the list, however, that wouldn’t have ended up on it had Bloomberg compiled it before last year: Equinix and Switch SuperNAP.

Both are data center service providers, companies that provide data center space and power to other companies, including probably all of the other organizations on the list, as a service. The main reason companies like Equinix and Switch wouldn’t make the list in 2014 is that there wasn’t a strong-enough business case for them to invest in renewable energy for their data centers. There was little interest from customers in data center services powered by renewable energy.

While that is still true to a large extent, it is changing. Some of the biggest and most coveted users of data center services are more interested than ever in powering as much of their infrastructure with renewable energy as possible, and being able to offer this will continue to grow in importance as a competitive strategy for data center providers.

Just last week, Digital Realty Trust, also one of the world’s largest data center providers, announced it had secured a wind power purchase agreement that would cover the energy consumption of all of its colocation data centers in the US.

More Interest from Data Center Customers

According to a recent survey of consumers of retail colocation and wholesale data center services by Data Center Knowledge, 70 percent of these users consider sustainability issues when selecting data center providers. About one-third of the ones that do said it was very important that their data center providers power their facilities with renewable energy, and 15 percent said it was critical.

Survey respondents are about equally split between wholesale data center and retail colocation users from companies of various sizes in a variety of industry verticals, with data center requirements ranging from less than 10kW to over 100MW. More than half are directly involved in data center selection within their organizations.

Most respondents (70 percent) said their interest in data centers powered by renewable energy would increase over the next five years. More than 60 percent have an official sustainability policy, while 25 percent are considering developing one within the next 18 months.

Download results of the Data Center Knowledge survey in full: Renewable Energy and Data Center Services in 2016

While competitive with fossil-fuel-based energy, renewable energy still often comes at a premium. The industry isn’t yet at the level of sophistication where a customer can choose between data center services powered by renewables as an option – and pay accordingly – or regular grid energy that’s theoretically cheaper. Even utilities, save for a few exceptions, don’t have a separate rate structure for renewable energy.

The options for bringing renewable energy directly to data centers today are extremely limited. Like internet giants, Equinix and Switch have committed to buying an amount of renewable energy that’s equivalent to the amount of regular grid energy their data centers in North America consume, but it doesn’t mean all that energy will go directly to their facilities. This is an effective way to bring more renewable generation capacity online, but it does little to reduce data center reliance on whatever fuel mix supplies the grids the facilities are on for both existing and future demand.

If, however, more utilities started offering renewable energy as a separate product, with its own rate – as Duke Energy has done in North Carolina after being lobbied by Google – data center providers would be able to offer the option to their customers, and it would probably be a popular option, even if it meant paying a premium. According to our survey, close to one-quarter of data center customers would “probably” be willing to pay a premium for such a service. Eight percent said they would “definitely” be willing to do so, and 37 percent said “possibly.”

At no additional cost, however, 40 percent said they would “definitely” be more interested in using data center services powered by renewable energy.

As the survey shows, interest in renewable energy among users of third-party data center services is on the rise, and if more utilities and data center providers can find effective ways to offer clean energy to their end users, they will find that there is not only an appetite for it in the market, but also that the appetite is growing.

Download results of the Data Center Knowledge survey in full: Renewable Energy and Data Center Services in 2016

Source: TheWHIR

OpenStack Fuels Surge of Regional Rivals to Top Cloud Providers

Brought to you by Data Center Knowledge

As the handful of top cloud providers expand around the world, battling it out in as many markets as they can get to, they are also increasingly competing with smaller regional players in addition to each other. One of the biggest reasons for this surge in regional cloud players is OpenStack.

This family of open source cloud infrastructure software has lowered the barrier to entry into the cloud service provider market. Combined with the rise of local regulatory and data sovereignty concerns and demand for alternatives to the top cloud providers, OpenStack has fueled the emergence of numerous regional cloud providers around the world over the last two years, according to the market research firm IDC.

Most of these regional players are using OpenStack, IDC said in a statement this week. The analysts expect growth in regional cloud providers to continue.

The announcement focuses on one major sector of the cloud market: Infrastructure-as-a-Service. Amazon Web Services continues to dominate it, “followed by a long tail of much smaller service providers.”

The firm forecasts the size of the global IaaS market to more than triple between 2015 and 2020, going from $12.6 billion last year to $43.6 billion four years from now.
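
That forecast implies a compound annual growth rate of roughly 28 percent, as a quick calculation shows:

```python
# Implied compound annual growth rate behind IDC's IaaS forecast.
start, end, years = 12.6, 43.6, 5  # $B in 2015 -> $B in 2020
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # about 28% per year
```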

This expansion is poised to ensure continued growth in demand for data center capacity around the world, as both top cloud providers and smaller regional players build out their infrastructure to support more and more users.

Read more: How Long Will the Cloud Data Center Land Grab Last?

Unlike the early years of cloud, when the majority of the growth was driven by born-on-the-web startups and individual developers, the next phase of growth will be fueled to a large extent by enterprises.

Almost two-thirds of respondents to a recent IDC survey of more than 6,000 IT organizations said they were already using public cloud IaaS or were planning to start using it by the end of this year.

Enterprises are increasingly looking to public cloud services to help them make their businesses more agile, Deepak Mohan, a research director at IDC, said in a statement:

“This is bringing about a shift in IT infrastructure spending, with implications for the incumbent leaders in enterprise infrastructure technologies. Growth of public cloud IaaS has also created new service opportunities around adoption and usage of public cloud resources. With changes at the infrastructure, architectural, and operational layers, public cloud IaaS is slowly transforming the enterprise IT value chain.”

See also: Top Cloud Providers Made $11B on IaaS in 2015, but It’s Only the Beginning

Source: TheWHIR

Can Pied Piper Really Afford HPE's Composable Infrastructure?

Brought to you by Data Center Knowledge

With Starbucks and Apple logos so common in movies and TV shows that they’re practically unnoticeable, product placement for enterprise technology is the hot marketing challenge of the day.

As we ROFLed watching the season-three finale of Mike Judge’s Silicon Valley, it was hard not to notice the gigantic black rack bearing a green rectangle sitting in the cluttered garage of the Pied Piper/Bachmanity headquarters that doubles as the startup’s data center and triples as Jared’s bedroom.

HBO’s brilliant satirical take on the San Francisco Bay Area tech scene is where converged infrastructure vendors have found their perfect place for product placement.

But compared to the subtle appearances of SimpliVity’s OmniCube on the show – that’s what the much dreaded “box” Pied Piper was forced to build by its promptly ousted CEO Jack Barker was based on – the appearance of Hewlett Packard Enterprise’s Synergy in the season finale is a rather clunky feat of enterprise product placement.

More on HPE Synergy: HPE Rethinks Enterprise Computing

The show generally gets things right about tech, both the business and the technology. As we’ve pointed out before, it has been fairly spot-on on the data center side of things too, so it was puzzling to see HPE’s latest and greatest in data center gear, its composable infrastructure machine, sitting among the more fitting mess of servers, cables, milk crates, and tool shelves Gilfoyle had concocted earlier to support the startup’s IT requirements.

Pied Piper is out of money at this point, and it’s hard to believe it can afford HPE’s latest iteration on converged infrastructure, let alone one whose official shipping date is unclear at the moment. Besides, hasn’t Pied Piper already migrated to the cloud?

You can see the HPE Synergy rack briefly in the beginning of this promo clip for the season finale:

[embedded content]

Source: TheWHIR

Report: US No Longer Lowest-Risk Country for Data Centers

Brought to you by Data Center Knowledge

There are more data centers in the US than anywhere else, and until at least three years ago, building a data center in the US was less risky than building one in any other country. According to a recent risk analysis of global data center locations by a real estate services firm, however, that’s no longer the case.

The US ranks third in electricity costs, fifth in ease of doing business, 15th in available network bandwidth, and 36th in corporate tax environment. These and six other characteristics add up to the US being the 10th least risky data center location today, according to the firm.

The same report, Cushman & Wakefield’s Data Centre Risk Index, put the country at the top of the list just three years ago. Since 2013, the US has been overtaken by four Nordic countries, as well as Switzerland, the UK, Canada, Singapore, and South Korea.

The index ranks countries based on 10 factors that have a bearing on the level of risk for building and operating data centers. Different factors affect a country’s overall ranking to different degrees. GDP per capita, for example, doesn’t have nearly the weight of the likelihood of natural disasters, and water availability isn’t as strong a factor as political stability or energy security.
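
Cushman & Wakefield doesn’t publish its weights in this form, but the mechanics of such an index are straightforward: a weighted sum of per-factor ranks, with lower totals meaning lower risk. A sketch with invented weights that merely follow the relative ordering described above:

```python
# Illustrative weighted risk index. The weights are invented; they only
# respect the orderings mentioned in the article (e.g., natural
# disasters outweigh GDP per capita).
WEIGHTS = {
    "natural_disasters": 0.15, "political_stability": 0.13,
    "energy_security": 0.12, "energy_cost": 0.11,
    "connectivity": 0.10, "ease_of_business": 0.10,
    "corporate_tax": 0.09, "renewables": 0.08,
    "water": 0.07, "gdp_per_capita": 0.05,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9

def risk_score(ranks):
    """Weighted sum of a country's per-factor ranks; lower is better."""
    return sum(WEIGHTS[f] * rank for f, rank in ranks.items())

# Hypothetical country: strong everywhere (rank 5) except energy cost.
example = {f: 5 for f in WEIGHTS}
example["energy_cost"] = 30
print(f"score: {risk_score(example):.2f}")  # 7.75
```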

Considering all 10, Iceland is the safest data center location bet you can make, followed by Norway, Switzerland, Finland, and Sweden, rounding out the top five in that order. Canada ranks sixth, followed by Singapore, South Korea, and the UK.

The report looks at 37 countries Cushman considers either major or emerging data center markets. It’s based on a survey of thousands of data center operators around the world.

So, what is it about Iceland that makes it such a safe haven for data centers? According to the report, the country ranks high on availability of renewable energy and water, low risk of natural disasters, political stability, low energy costs, and favorable corporate taxes. It’s also better than many others in terms of connectivity, ease of doing business, and GDP per capita. Iceland ranked 22nd in energy security, its lowest ranking in any category.

Risk, of course, isn’t the only thing driving data center location decisions. It is just one of several variables, the others being things like proximity to end users and the ability to improve customer experience. While there are data centers in Iceland, there are relatively few of them.

Proximity to users remains a huge consideration, but corporations are also increasingly concerned about political stability, risk of natural disasters, and energy security when weighing data center locations. All three have surpassed traditional considerations like cost and connectivity in priority, according to the report. Collectively, these three factors now account for one-third of the overall decision, “implying a level of emotional sentiment throughout the survey following a number of major incidents over the past few years.”

The latest example of such an “incident” happened just recently, months after the survey was conducted. It’s unclear whether the UK’s current ranking on the index (it’s in ninth place) would be different had the report taken into account the country’s vote last month to exit the European Union.

In a statement, Cushman’s head of London markets, Digby Flower, said that, following the Brexit referendum, real estate occupiers in London with strategic plans “will move slowly,” referring to the real estate market in general. There are signs, however, that demand for data centers as a subcategory of the real estate sector is less affected by Brexit than the category as a whole.

Download the full report here

Source: TheWHIR