Data Foundry Expands Managed Services Offering With Cloud Services

Data Foundry has announced the addition of Cloud Services to its portfolio of managed services. Data Foundry has always prided itself on being a strategic IT partner and offering more than just colocation space. We are pleased to begin providing Dedicated Cloud Storage and CloudTap, a private and secure cloud connection service.

In addition to the new Cloud Services, Data Foundry currently offers a suite of virtual and physical security services, network services, structured cabling and infrastructure installation.

“We continue to aggressively expand our managed services portfolio, and we are excited to provide our customers with more options when it comes to storage and access to cloud services,” says Mark Noonan, executive vice president of sales. “These new services complement our core services and enable our customers to better manage their overall IT strategy.”

Data Foundry’s Dedicated Cloud Storage is an enterprise storage-as-a-service solution with high availability features. It is a private storage solution that exists on virtual storage arrays and consists of dedicated cores and disks. Workloads in each array are completely isolated from one another, and users own their encryption keys. Users can also choose from SSD, SATA or SAS storage, or a combination of these. This provides companies with greater flexibility and reduced capital expenditure, as they would normally have to purchase these storage resources individually. Storage arrays are located in our Texas 1 data center in Austin, TX, and our customers are able to access their arrays via private transport, making it a highly secure and fast option for storage.

Data Foundry’s other new cloud service, CloudTap, allows colocation customers to access cloud storage and cloud services from major providers, such as Azure, AWS and Google Cloud without traversing the public Internet. Our network engineers have designed a solution that enables protected connectivity to all the major cloud providers.

Source: CloudStrategyMag

AWS Direct Connect Service now Available in Equinix Los Angeles

Equinix, Inc. has announced the immediate availability of Amazon Web Services (AWS) Direct Connect cloud service in Equinix’s Los Angeles (IBX®) data centers. With AWS Direct Connect, companies can connect their customer-owned and managed infrastructure directly to AWS, establishing a private connection to the cloud that can reduce costs, increase performance, and deliver a more consistent network experience. The Equinix Los Angeles location brings the total number of Equinix metros offering AWS Direct Connect service to twelve globally, five of which are in North America.

“As one of the early data center partners to offer AWS Direct Connect services, our goal has always been to provide our customers with the ability to realize the full benefits of the cloud — without worrying about application latency or cost issues. By offering access to AWS via the Direct Connect service in Los Angeles, we are providing additional ways for our North American customers to achieve improved performance of their cloud-based applications,” said Greg Adgate, vice president, Equinix.

Cloud adoption continues to rise among both startups and the enterprise. In fact, recent survey results from 451 Research’s “Voice of the Enterprise: Cloud” program found that 52% of 440 enterprises surveyed indicated that their public cloud spending would increase in the immediate future. By providing direct access to AWS cloud inside Equinix data centers, Equinix is enabling enterprise CIOs to advance their cloud strategies by seamlessly and safely incorporating public cloud services into their existing architectures.

“Our quarterly survey of cloud adoption and spending shows steadily increasing growth in both enterprise usage and investment in cloud services. Equinix is fostering this trend by enabling direct, low-latency, secure connections to cloud services, like AWS Direct Connect, within its multi-tenant facilities,” said Andrew Reichman, director, Voice of the Enterprise: Cloud.

The Equinix Los Angeles campus includes four Equinix IBX data centers, which are connected via Metro Connect. While AWS Direct Connect service will reside in the LA3 facility, customers can connect to AWS Direct Connect from any one of these IBX data centers through Metro Connect.  Equinix’s Los Angeles data centers are business hubs for more than 250 companies, and offer interconnections to network services from more than 80 service providers.

Equinix Los Angeles data centers are central to the network strategies of digital content and entertainment companies looking to reach their end users quickly. These companies can now leverage the benefits of AWS cloud to create, deliver and measure compelling content and customer experiences in a highly scalable, elastic, secure and cost effective manner utilizing AWS Direct Connect.

With the addition of Los Angeles, Equinix now offers the AWS Direct Connect service in Amsterdam, Dallas, Frankfurt, London, Los Angeles, Osaka, Seattle, Silicon Valley, Singapore, Sydney, Tokyo and Washington, D.C./Northern Virginia. Equinix customers in these metros will be able to lower network costs into and out of AWS and take advantage of reduced AWS Direct Connect data transfer rates.

Source: CloudStrategyMag

New programming language promises a 4X speed boost on big data

Memory management can be challenge enough on traditional data sets, but when big data enters the picture, things can slow way, way down. A new programming language announced by MIT this week aims to remedy that problem, and so far it’s been found to deliver fourfold speed boosts on common algorithms.

The principle of locality is what governs memory management in most computer chips today, meaning that if a program needs a chunk of data stored at some memory location, it’s generally assumed to need the neighboring chunks as well. In big data, however, that’s not always the case. Instead, programs often must act on just a few data items scattered across huge data sets.
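
To make that pattern concrete, here is a minimal Java sketch of the kind of indirect, scattered access being described; the array names are invented for illustration. Each iteration touches an element chosen by an index array, so consecutive iterations rarely land on neighboring memory locations and the chip’s locality assumption buys little.

    // Sketch of a scattered-access workload (hypothetical names).
    // 'weights' is a large array; 'touched' holds a comparatively small,
    // effectively random set of indices into it, so most accesses miss in
    // cache and trigger their own trips to main memory.
    public final class ScatteredSum {
        static double scatteredSum(double[] weights, int[] touched) {
            double sum = 0.0;
            for (int idx : touched) {
                sum += weights[idx]; // neighbors of weights[idx] are fetched but rarely reused
            }
            return sum;
        }

        public static void main(String[] args) {
            double[] weights = new double[1 << 20];
            java.util.Arrays.fill(weights, 1.0);
            java.util.Random rnd = new java.util.Random(42);
            int[] touched = new int[10_000];
            for (int i = 0; i < touched.length; i++) {
                touched[i] = rnd.nextInt(weights.length);
            }
            System.out.println(scatteredSum(weights, touched));
        }
    }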

Fetching data from main memory is the major performance bottleneck in today’s chips, so having to fetch it more frequently can slow execution considerably.

“It’s as if, every time you want a spoonful of cereal, you open the fridge, open the milk carton, pour a spoonful of milk, close the carton, and put it back in the fridge,” explained Vladimir Kiriansky, a doctoral student in electrical engineering and computer science at MIT.

With that challenge in mind, Kiriansky and other researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created Milk, a new language that lets application developers manage memory more efficiently in programs that deal with scattered data points in large data sets.

Essentially, Milk adds a few commands to OpenMP, an API for languages such as C and Fortran that makes it easier to write code for multicore processors. Using it, the programmer inserts a few additional lines of code around any instruction that iterates through a large data collection looking for a comparatively small number of items. Milk’s compiler then figures out how to manage memory accordingly.

With a program written in Milk, when a core discovers that it needs a piece of data, it doesn’t request it — and the attendant adjacent data — from main memory. Instead, it adds the data item’s address to a list of locally stored addresses. When the list gets long enough, all the chip’s cores pool their lists, group together those addresses that are near each other, and redistribute them to the cores. That way, each core requests only data items that it knows it needs and that can be retrieved efficiently.
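
The batching strategy itself can be illustrated with a short, single-threaded Java sketch. It is only an illustration of the idea described above (buffer the addresses, group nearby ones, then process the batch); it is not Milk’s actual compiler output or runtime, and the class name and buffer size are invented.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Conceptual illustration of access batching (not Milk itself): instead of
    // dereferencing each index immediately, remember it, and once the buffer is
    // full, sort the indices so nearby addresses are grouped before touching memory.
    public final class BatchedGather {
        private static final int BATCH = 4096; // arbitrary batch size for the sketch

        static double batchedSum(double[] weights, int[] touched) {
            double sum = 0.0;
            List<Integer> pending = new ArrayList<>(BATCH);
            for (int idx : touched) {
                pending.add(idx);              // defer the access, keep only the address
                if (pending.size() == BATCH) {
                    sum += drain(weights, pending);
                }
            }
            sum += drain(weights, pending);    // flush the final partial batch
            return sum;
        }

        private static double drain(double[] weights, List<Integer> pending) {
            Collections.sort(pending);         // group indices so accesses walk memory in order
            double sum = 0.0;
            for (int idx : pending) {
                sum += weights[idx];
            }
            pending.clear();
            return sum;
        }
    }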

In tests on several common algorithms, programs written in the new language were four times as fast as those written in existing languages, MIT says. That could get even better, too, as the researchers work to improve the technology further. They’re presenting a paper on the project this week at the International Conference on Parallel Architectures and Compilation Techniques.

Source: InfoWorld Big Data

Beyond Solr: Scale search across years of event data

Every company wants to guarantee uptime and positive experiences for its customers. Behind the scenes, in increasingly complex IT environments, this means giving operations teams greater visibility into their systems — stretching the window of insight from hours or days to months and even multiple years. After all, how can IT leaders drive effective operations today if they don’t have the full-scale visibility needed to align IT metrics with business results?

Expanding the window of visibility has clear benefits in terms of identifying emerging problems anywhere in the environment, minimizing security risks, and surfacing opportunities for innovation. Yet it also has costs. From an IT operations standpoint, time is data: The further you want to see, the more data you have to collect and analyze. It is an enormous challenge to build a system that can ingest many terabytes of event data per day while maintaining years of data, all indexed and ready for search and analysis.

These extreme scale requirements, combined with the time-oriented nature of event data, led us at Rocana to build an indexing and search system that supports ever-growing mountains of operations data — for which general-purpose search engines are ill-suited. As a result, Rocana Search has proven to significantly outperform solutions such as Apache Solr in data ingestion. We achieved this without restricting the volume of online and searchable data, with a solution that balances load responsively and scales horizontally via dynamic partitioning.

The need for a new approach

When your mission is to enable petabyte-level visibility across years of operational data, you face three primary scalability challenges:

  • Data ingestion performance: As the number of data sources monitored by IT operations teams grows, can the system continue to pull in data, index it immediately, store it indefinitely, and categorize it for faceting?
  • Volume of searchable data that can be maintained: Can the system keep all data indexed as the volume approaches petabyte scale, without pruning data at the cost of losing historical analysis?
  • Query speed: Can the index perform more complex queries without killing performance?

The major open source contenders in this search space are Apache Solr and Elasticsearch, which both use Lucene under the covers. We initially looked very closely at these products as potential foundations on which to build the first version of Rocana Ops. While Elasticsearch has many features that are relevant to our needs, potential data loss has significant implications for our use cases, so we decided to build the first version of Rocana Ops on Solr.

Solr’s scaling method is to shard the index, which splits the various Lucene indexes into a fixed number of separate chunks. Solr then spreads them out across a cluster of machines, providing parallel and faster ingest performance. At lower data rates and short data retention periods, Solr’s sharding model works. We successfully demonstrated this in production environments with limited data retention requirements. But the Lucene indexes still grow larger over time, presenting persistent scalability challenges and prompting us to rethink the best approach to search in this context.

Comparing partitioning models

Like Elasticsearch and Solr, Rocana Search is a distributed search engine built on top of Lucene. The Rocana Search sharding model is significantly different from Elasticsearch and Solr. It creates new Lucene indexes dynamically over time, enabling customers to retain years of indexed data on disk and have it immediately accessible for query, while keeping each Lucene index small enough to maintain low-latency query times.

Why didn’t the Solr and Elasticsearch sharding models work for us? Both solutions have a fixed sharding model, where you specify the number of shards at the time the collection is created.

With Elasticsearch, changing the number of shards requires you to create a new collection and re-index all of your data. With Solr, there are two ways to grow the number of shards for a pre-existing collection: splitting shards and adding new shards. Which method you use depends on how you route documents to shards. Solr has two routing methods, compositeId (default) and implicit. With either method, large enterprise production environments will eventually hit practical limits for the number of shards in a single index. In our experience, that limit is somewhere between 600 and 1,000 shards per Solr collection.
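
For context, the fixed-shard model looks roughly like this through Solr’s Collections API, shown here via SolrJ; the collection, config, and shard names are made up, and exact builder and method signatures vary by Solr version. The shard count is set when the collection is created, and growing it later means either splitting an existing shard or, for an implicitly routed collection, adding a named shard.

    import java.util.Collections;
    import java.util.Optional;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;

    public final class SolrShardGrowth {
        public static void main(String[] args) throws Exception {
            // Connect to a SolrCloud cluster through ZooKeeper (hypothetical host).
            try (CloudSolrClient client = new CloudSolrClient.Builder(
                    Collections.singletonList("zk1.example.com:2181"), Optional.empty()).build()) {

                // The shard count is fixed at creation time (compositeId routing by default).
                CollectionAdminRequest.createCollection("events", "events_config", 8, 2)
                        .process(client);

                // Growth option 1: split an existing shard in two.
                CollectionAdminRequest.splitShard("events")
                        .setShardName("shard1")
                        .process(client);

                // Growth option 2: add a named shard, which only works for collections
                // created with the implicit router (assumed for "events_implicit").
                CollectionAdminRequest.createShard("events_implicit", "2016-09-12")
                        .process(client);
            }
        }
    }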

Before the development of Rocana Search, Rocana Ops used Solr with implicit document routing. While this made it difficult to add shards to an existing index, it allowed us to build a time-based semantic partitioning layer on top of Solr shards, giving us additional scalability on query, as we didn’t have to route every query to all shards.

In production environments, our customers are ingesting billions of events per day, so ingest performance matters. Unfortunately, fixed shard models and very large daily data volumes do not mix well. Eventually you will have too much data in each shard, causing ingest and query to slow dramatically. You’re then left choosing between two bad options:

  1. Create more shards and re-index all data into them (as described above).
  2. Periodically prune data out of the existing shards, which requires deleting data permanently or putting it into “cold” storage, where it is no longer readily accessible for search.

Unfortunately, neither option suited our needs.

The advantages of dynamic sharding

Data coming into Rocana Ops is time-based, which allowed us to create a dynamic sharding model for Rocana Search. In the simplest terms, you can specify that a new shard be created every day on each cluster node: at 100 nodes, that’s 100 new shards every day. If your time partitions are configured appropriately, the dynamic sharding model allows the system to scale over time to retain as much data as you want to keep, while still achieving high rates of data ingest and ultra-low-latency queries. What allows us to utilize this strategy is a two-part sharding model:

  1. We create new shards over time (typically every day), which we call partitions.
  2. We slice each of those daily partitions into smaller pieces, and these slices correspond to actual Lucene directories.

Each node on the cluster will add data to a small number of slices, dividing the work of processing all the messages for a given day across an arbitrary number of nodes as shown in Figure 1.

Figure 1: Partitions and slices on Rocana Search servers. In this small example, two Rocana Search servers, with two slices (S) per node, have data spanning four time partitions. The number of partitions will grow dynamically as new data comes in.

Each event coming to Rocana Ops has a timestamp. For example, if the data comes from a syslog stream, we use the timestamp on the syslog record, and we route each event to the appropriate time partition based on that timestamp. All queries in Rocana Search are required to define a time range — any given window of time where an item of interest happened. When a query arrives, it will be parsed to determine which of the time partitions on the Rocana Search system are in scope. Rocana Search will then only search that subset.
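
Rocana Search’s implementation is proprietary, but a minimal sketch of the routing and pruning idea, assuming day-granularity partitions and with all class and method names invented, might look like the following: each event lands in the partition for its timestamp’s UTC day, and a query consults only the partitions whose day overlaps the requested time range.

    import java.time.Instant;
    import java.time.LocalDate;
    import java.time.ZoneOffset;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.NavigableMap;
    import java.util.TreeMap;
    import java.util.stream.Collectors;

    // Illustrative only: one partition per UTC day, in the spirit of the model
    // described above. A real system would back each partition with slices
    // (Lucene directories) spread across nodes rather than an in-memory list.
    public final class TimePartitionedIndex {
        private final NavigableMap<LocalDate, List<String>> partitions = new TreeMap<>();

        // Route an event to the partition matching its timestamp's UTC day.
        void index(Instant eventTime, String event) {
            LocalDate day = eventTime.atZone(ZoneOffset.UTC).toLocalDate();
            partitions.computeIfAbsent(day, d -> new ArrayList<>()).add(event);
        }

        // Only partitions overlapping [from, to] are searched; all others are skipped.
        List<String> query(Instant from, Instant to) {
            LocalDate first = from.atZone(ZoneOffset.UTC).toLocalDate();
            LocalDate last = to.atZone(ZoneOffset.UTC).toLocalDate();
            return partitions.subMap(first, true, last, true).values().stream()
                    .flatMap(List::stream)
                    .collect(Collectors.toList());
        }
    }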

Ingest performance

The difference in ingestion performance between Solr and Rocana Search is striking. In controlled tests on a small cluster, Rocana Search’s initial performance proved as much as two times better than Solr’s, and the performance gap grows significantly over time as the systems ingest more data. At the end of these tests, Rocana Search performs in the range of five to six times faster than Solr.

Figure 2: Comparing data ingestion speed of Rocana Search versus Solr over a 48-hour period on the same four-DataNode Hadoop (HDFS) cluster. Rocana Search is able to ingest more than 12.5 billion events, versus 2.4 billion for Solr.

Event size and cardinality can significantly impact ingestion speed for both Solr and Rocana Search. Our tests include both fixed- and variable-sized data, and the results follow our predicted pattern: Rocana Search’s ingestion rate remains relatively steady while Solr’s decreases over time, mirroring what we’ve seen in much larger production environments.

Query performance

Rocana Search’s query performance is competitive with Solr and can outperform Solr while data ingestion is taking place. In querying for data with varying time windows (six hours, one day, three days), we see Solr returning queries quickly for the fastest 50 percent of the queries. Beyond this, Solr query latency starts increasing dramatically, likely due to frequent multisecond periods of unresponsiveness during data ingest.

Figure 3: Comparing query latency of Rocana Search versus Solr. Query is for time ranges of six hours, one day, and three days, on a 4.2TB dataset on a four-DataNode Hadoop (HDFS) cluster.

Rocana Search’s behavior under ingest load is markedly different than that of Solr. Rocana Search’s query times are much more consistent, well into the 90th percentile of query times. Above the 50th percentile, Rocana Search’s query times edge out Solr across multiple query range sizes. There are several areas where we anticipate being able to extract additional query performance for Rocana Search as we iterate on the solution, which our customers are already using in production.

A solution for petabyte-scale visibility

Effectively managing today’s complex and distributed business environments requires deep visibility into the applications, systems, and networks that support them. Dissatisfaction with standard approaches led us to develop a unique solution that has already been put into production and been proven to work.

By leveraging the time-ordered nature of operations data and a dynamic sharding model built on Lucene, Rocana Search keeps index sizes reasonable, supports high-speed ingest, and maintains performance by restricting time-oriented searches to a subset of the full data set. As a result, Rocana Search is able to scale indexing and searching in a way that other potential solutions can’t match.

As a group of services coordinated across multiple Hadoop DataNodes, Rocana Search creates shards (partitions and slices) on the fly, without manual intervention, server restarts, or the need to re-index already processed data. Ownership of these shards can be automatically transferred to other Rocana Search nodes when nodes are added or removed from the cluster, requiring no manual intervention.

IT operations data has value. The amount of that data you keep should be dictated by business requirements, not the limitations of your search solution. When enterprises face fewer barriers to scaling data ingest and search, they are able to focus on how to search and analyze as much of their IT operations event data as they wish, for as long as they choose, rather than worrying about what data to collect, what to keep, how long to store it, and how to access it in the future.

Brad Cupit and Michael Peterson are platform engineers at Rocana.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.

Source: InfoWorld Big Data

Report: Dell Technologies Challenging Leaders For Cloud Infrastructure Leadership

New Q2 data from Synergy Research Group shows that in the burgeoning market for technology to build clouds, the newly formed Dell Technologies is now chasing HPE, Cisco, and Microsoft for leadership in the three main segments — private cloud hardware, public cloud hardware, and cloud infrastructure software, respectively. While the Dell EMC merger didn’t officially close until September, Dell and EMC in aggregate would have been ranked second in private cloud hardware and third in public cloud hardware based on worldwide Q2 revenues. Across all cloud hardware, HPE had the lead with 15%, closely followed by Cisco at 14% and Dell EMC at 13%. Meanwhile, Dell Technologies’ majority-owned VMware subsidiary was ranked second in cloud infrastructure software. Other major cloud infrastructure vendors included IBM, Lenovo, Huawei, Oracle and NetApp. The growth rate for the total cloud infrastructure market dropped off a little in the quarter, but on a rolling annualized basis it still grew by over 16%.

For the last nine quarters total spend on data center infrastructure — which includes servers, server OS, storage, networking, network security and virtualization software — has been running at an average $29 billion, with the market being increasingly driven by the cloud. Cloud deployments or shipments of systems that are cloud enabled now account for well over half of the total data center infrastructure market.

“While total spending on data center infrastructure remains relatively flat, cloud share of that spending continues to rise as an ever-increasing portion of computer workloads migrate to either public or private clouds,” said Jeremy Duke, founder and chief analyst, Synergy Research Group. “We are also seeing that within the cloud infrastructure market, hyperscale cloud operators are accounting for an ever-larger share of overall capex. This is a trend which is not going to change any time soon.”

Source: CloudStrategyMag

Teradata expands analytics for hybrid cloud

Analytics solutions provider Teradata has released new hybrid-cloud management capabilities to better compete with rising pressure from both open source and commercial solutions.

Teradata Everywhere expands support for existing cloud-hosted Teradata solutions and adds new hybrid and cross-cloud orchestration components that make it possible to manage Teradata instances across “on-premises appliances, on-premises virtualization environments, managed cloud, and public cloud,” according to the company’s announcement.

Teradata was previously available on Amazon Web Services, but the latest iteration provides up to 32 nodes per instance and conveniences like automated backup functionality. Later this year, Microsoft Azure is set to start running this iteration of Teradata, as are VMware environments, Teradata’s own Managed Cloud in Europe (Germany), and Teradata’s on-premises IntelliFlex platform. (Google Compute Engine support was not among the environments mentioned in the announcement.)

Other improvements in the works, but not slated to debut until next year, are features to allow expansion and rebalancing of data loads between Teradata instances without major downtime and a new “in-stream query re-planning” system designed to optimize queries as they are being executed.

Teradata’s plans involve more than providing a way to run cloud-hosted instances of its database on the infrastructure of one’s choices. Rather, the company says it hopes to make Teradata as “borderless,” or hybrid, as possible. Teradata QueryGrid and Teradata Unity are being revised to better support this goal.

One key change — managing Teradata instances across environments — is available now. But many of the others — for example, automatic capture and replay of selected data changes between Teradata systems or one-click database initialization across systems — are projected to be ready in 2017.

Though powerful, Teradata is facing stiffer competition. After Hadoop came to prominence as a commodity open source data-analysis solution, Teradata made use of it as a data source by way of the commercial MapR distribution.

Clouds such as Amazon Redshift or Microsoft’s Azure SQL also offer data warehousing solutions. Azure SQL has been enhanced by changes to SQL Server that encourage the bursting-to-the-cloud expansion that Teradata is now promising. There’s also pressure from new kinds of dedicated cloud services, such as Snowflake, which promises maximum performance with minimal oversight.

Source: InfoWorld Big Data

Global Clients Adopt IBM Aspera Hybrid Cloud Service For Large Media Transfer

IBM has announced continued adoption of the company’s Aspera Files hybrid cloud-based content sharing service and enhancements to the platform that are enabling more media and entertainment companies around the world to speed content collaboration and distribution.

Based on Aspera FASP® technology, IBM Aspera Files™ is a Software-as-a-Service (SaaS) offering on IBM Cloud that accelerates the sharing and transfer of large files and directories – even the largest content files and associated metadata – directly from its native storage environment whether on premises or in the cloud. A multi-tenant solution, Aspera Files is designed to be easy to use and quickly available, providing transfers at maximal speeds in predictable times and offering a rich set of capabilities for sharing, distributing and managing large files.

The cloud service is offered as an all-inclusive platform hosted on the IBM Cloud with a built-in Aspera transfer service and IBM Cloud object storage. Files can also seamlessly connect to Aspera transfer nodes and storage running on all major third-party cloud infrastructure providers, as well as storage deployed on customer premises, providing advanced security and user access controls and fast, direct transfer of content of any size, at any distance, independent of storage location.

A new pay-as-you-go option enables customers of all sizes to take advantage of the power of the Aspera platform with a cost-effective offering that scales with their business, and the brandable workspace model accommodates the project-based nature of most digital media businesses. New customers such as Beelink Productions in Dubai, action concept in Germany and Outpost VFX in the UK are using Aspera Files to replace existing and often cumbersome content sharing techniques, and to exchange content with an ecosystem of other Aspera users.

“Media companies are moving fast, constantly innovating and looking for technologies that help them accelerate business,” said Michelle Munson, CEO of Aspera. “Aspera Files gives companies high speed sharing quickly, without having to provision infrastructure. In addition, they can quickly and easily leverage the rich features of the solution via an online trial sign up, and an online pay-as-you-go experience.”

More than 20 new capabilities have been introduced for Aspera Files since its launch, including:

  • One-click sharing of folders with authenticated third parties for upload and download, and one-click invitations to submit content and metadata as packages to branded dropboxes;
  • Media carousel previews to view and manage large numbers of video and photo files;
  • New Drive and Mobile Applications to browse, share, sync and send and receive packages from the desktop and to contribute content from iOS and Android devices;
  • Advanced Security for fine-grained control over folder sharing and package delivery with external recipients and enterprise single sign on;
  • Ultra-fast Auto-scaling transfers to Aspera clusters running in the cloud (10 Gbps).

Clients Move to Aspera Files

Beelink Productions
Headquartered in Dubai, Beelink produces and distributes exclusive content, mainly Arabic drama series, from their studios in Egypt. With a growing audience base, Beelink committed to offering three exclusive Arabic drama series of 30 episodes each during Ramadan: Grand Hotel, Wanoos, Series and Heba Regel Elghourab. To meet an expanded production schedule, Beelink replaced its legacy HDD distribution, which often resulted in unacceptable delays, with Aspera Files. The company was in full production within hours, with nothing to provision or deploy except an install-on-demand browser plug-in.

Beelink organized episodes in folders on a per-series basis and used the “share” facility with view-only permissions to distribute only authorized content to their customers, who were required to log in for content-access auditing. Assets were uploaded from Egypt over a 50 Mbps line and then downloaded by Beelink customers at the highest speed from Aspera Files, which is built on top of Aspera FASP high-speed transfer technology. Episodes as large as 17 GB were uploaded to Files and downloaded within 25 minutes by customers on 100 Mbps lines.

“We are pleased with the Files service and the support we received by the local team to get us started within hours,” said Hala Obied, business coordinator at Beelink. “The breadth of features and functions coupled with the simplicity of the user interface allowed our customers to pull down their assets in minutes.”

Action Concept 
Action Concept, a leading action film producer in Germany, reaches top markets in over 100 countries with prime-time productions. It has also established itself as a sought-after producer for commissioned productions and in-house formats. The company turned to Aspera Files for a recent project with a Chinese client that required the production company to move large volumes of video material between their German facility and a business partner in South Korea. Previously, the company used FTP or physical FEDEX shipments to share video, but due to security concerns and an explicit request from the customer to avoid these methods of transport, action concept sought a new solution that could provide greater security and faster transfer times. Aspera Files made it easy for action concept to send and receive very large files that can reach two terabytes per captured scene in 4k or 6k resolutions.

“We decided to explore the possibilities with Aspera in large part because of its solid reputation in the industry, as well as the high level of security, ease of use and speed offered by the solution,” said Tom Dülks, Head of Technology, action concept. “We were not disappointed.”

Outpost VFX
Outpost VFX is a high-end visual effects company with a diverse portfolio across feature film, broadcast and advertising, music promos and virtual reality. The nature of the business requires Outpost to send and receive large media files to and from clients on a regular basis. The company uses the dropbox function in Aspera Files to securely receive work from clients and subcontractors around the world, and once a shot is officially approved, they use Files’ digital package-sending tool to deliver completed projects directly to the client.

“Aspera Files provides an affordable solution with the flexibility to scale up as we grow. The tool covers all the bases from a security standpoint, and Aspera’s prominence with large studios grants us a reputation of gravitas when we use it – it shows that Outpost is a serious VFX supplier,” said Danny Duke, managing director, Outpost VFX. “We couldn’t be happier with our decision to select Aspera.”

Aspera is demonstrating Aspera Files and its entire suite of high-speed file transfer and streaming solutions at IBC2016 from September 9-13 in Amsterdam, Hall 7, Stand G20.

Source: CloudStrategyMag

Insight Releases Hybrid Cloud Assessment

In a recent IDC (International Data Corporation) Multi-Client Study, CloudView 2016, respondents to the survey said they expect to increase their cloud spending by approximately 44% over the next two years, and 70% of heavy cloud users are thinking in terms of a “hybrid” cloud strategy. 

To keep pace with the rapidly changing technological landscape, Insight released a new Hybrid Cloud Assessment service, which helps businesses navigate and take advantage of the complex hybrid cloud environment.

A combination of both public and private platforms, hybrid cloud provides organizations with greater IT and infrastructure flexibility, as well as visibility and control over their cloud usage. As a result, hybrid cloud enables business agility, including streamlined operations and improved cost management. Companies can now enter new markets or launch new products and services more quickly and efficiently in a highly competitive business environment.

“We built a methodology and tool set that allows us to assess a company’s full portfolio of applications and to provide the optimal deployment and consumption model for each client,” said Stan Lequin, VP, services, Insight. “This approach enables us to deliver a non-disruptive and customized Hybrid Cloud roadmap.”

The Hybrid Cloud Assessment provides a clear and unbiased guide for businesses to transition to the cloud, including design, deployment, and management.

“We developed this tool to allow us to efficiently evaluate workloads and determine where they are best deployed based on application dependency mapping, cloud consumption models, and a variety of additional factors,” said Lequin.

Insight takes into account distinct market drivers and challenges and tests every potential IT scenario to develop the right solutions to help clients accomplish their specific goals.

Source: CloudStrategyMag

Fuze Is Named to First-Ever Forbes 2016 World’s Best 100 Cloud Companies List

Fuze (formerly ThinkingPhones) has announced it has been named to the first-ever Forbes 2016 Cloud 100, the definitive list of the top 100 private cloud companies in the world, developed in partnership with Bessemer Venture Partners.

 “We built our multi-tenant cloud platform from the ground up to be exceptionally agile, redundant, and secure without the restrictions of other UC services and the expense of on-premise designs. This enables our customers to deploy cloud communications quickly, easily, and cost effectively,” said Steve Kokinos, CEO, Fuze. “The scalable, flexible solution also allows customers to seamlessly add users from all corners of the globe, making Fuze the perfect fit for multi-location enterprises. Such adaptability, including a breadth of third-party application support, has been key to the platform’s adoption by global organizations and a large part of our continued industry recognition.”

“Cloud companies are revolutionizing how businesses reach their customers today from digitizing painful old processes to allowing them more time to focus on what they really care about — what makes their products unique,” said Alex Konrad, Forbes editor of the Cloud 100 list. “Inclusion in the Forbes 2016 Cloud 100 list recognizes a company for its financial growth and excellence as recognized by customers and peers.”

“These are the companies to watch!” said Byron Deeter, a leading cloud investor and partner at Bessemer Venture Partners. “The Forbes Cloud 100 companies represent the very best private companies in cloud computing. We will see big IPOs and category killers emerge from this list as cloud computing continues to propel the trillion-dollar software industry.”

“On behalf of Fuzers worldwide, we are thrilled to be named to the inaugural Forbes 2016 Cloud 100,” continued Kokinos. “Our mobile-first user experience is designed to delight today’s digitally empowered workforce, while our powerful suite of business analytics integrates with other cloud services to make our solution an indispensable tool for business and technology leaders.”

The list will appear in the Oct. 4, 2016 issue of Forbes magazine.

Methodology
The first-ever Forbes 2016 Cloud 100 list profiles the world’s top-tier private companies leading the cloud technology revolution, plus twenty rising stars within the field. With advancements in software, cloud security, or platform development, these companies are redefining the future for all industries and sectors.

Forbes, in partnership with Bessemer Venture Partners, received hundreds of submissions to identify the most promising private companies in cloud. The Forbes 2016 Cloud 100 was selected by a panel of judges representing leading public cloud companies, using qualitative and quantitative data submitted by nominees, along with publicly available third-party data sources.

Recognition

Every company named to the Forbes 2016 Cloud 100 is recognized in print and online by Forbes, and Forbes’ partners Bessemer Venture Partners and Salesforce Ventures. The companies also receive physical awards and digital badges signifying their inclusion on this exclusive list, as well as an invitation to the celebratory Cloud 100 Awards Dinner, hosted in San Francisco by Forbes, Bessemer Venture Partners and Salesforce Ventures.

Source: CloudStrategyMag

Equinix Introduces The Media Cloud Ecosystem For The Entertainment Industry

Equinix, Inc. has announced the Equinix Media Cloud Ecosystem for Entertainment (EMCEE™), an ecosystem of interconnected media and content providers, along with content delivery networks (CDNs) and cloud service providers that optimizes content creation, global distribution, and services across the entire media and entertainment (M&E) industry. Today, more than 500 content and media companies such as Content Bridge, Movile, and Selevision use EMCEE to peer with the industry’s largest concentration of CDNs, multiple system operators (MSOs) and social media platforms, enabling faster content development and distribution, as well as significant cost savings.

Digital disruption is affecting the M&E industry at an ever-accelerating pace – changing the way that content is created, enhanced, transported, stored and distributed. To embrace this disruption, M&E companies need to transform their infrastructure, from fixed and siloed to integrated and dynamic with interconnection at the forefront of their IT decision making. Global businesses, including media and entertainment companies, are increasingly leveraging colocation data centers to distribute their digital infrastructure across multiple geographies, and closer to the edge, to solve these challenges while also optimizing their IT for cloud-based offerings.

Components of the EMCEE ecosystem that enable this transformation include Equinix interconnection offerings across Platform Equinix™, network density, access to multiple clouds utilizing the Equinix Cloud Exchange™ and access to billions of consumers leveraging CDNs in Equinix International Business Exchange (IBX) data centers. In tests conducted in Equinix’s global Solution Validation Centers™, video streaming applications that flowed through Equinix experienced 47% lower network latency. The test results also show that Equinix customers save, on average, more than 25% on network bandwidth costs by aggregating Internet traffic delivery to improve performance and scalability.

Today’s consumers expect reliable, on-demand access to bandwidth-heavy digital content such as video, apps and online games. To meet consumer expectation, digital media and entertainment companies need an interconnected neutral ecosystem of content companies, advertising networks and content delivery services, accessible via secure, direct connections.

As media becomes commoditized in the digital era, new business models are increasingly focused on innovation and efficiency across the production cycle – and value creation at the point of engagement, where end-users expect high quality service on every device, all the time, everywhere. To capture the opportunity, businesses are streamlining production workflows, reducing time and cost, and expanding distribution capabilities to tap into billions of smart TVs and devices around the globe.

Built on Interconnection Oriented Architecture™ (IOA™), EMCEE efficiently improves network and application performance, security and end-user satisfaction. IOA directly and securely interconnects clouds, networks, business ecosystems and data at the edge, providing virtual control and transparency across the world’s most globally interconnected data centers, within the largest cloud and network provider-neutral marketplaces.

Equinix’s global interconnection platform also provides media and entertainment companies with industry-leading solutions, including Equinix Cloud Exchange, which provides direct access to major cloud service providers such as AWS, Google Cloud Platform, Microsoft Azure and Office 365, and IBM Softlayer in 21 markets globally, as well as Equinix Performance Hub™ and Equinix Data Hub™, which help develop faster, more efficient content creation workflows.

Equinix will be presenting EMCEE at the IBC 2016 Conference and Exhibition in Amsterdam from September 9-13 at booth B25, Hall 3.

Source: CloudStrategyMag