Microsoft Azure ExpressRoute Now Available Across Seven CoreSite Markets

CoreSite Realty Corporation has announced the expanded availability of Microsoft Azure ExpressRoute, which can now be privately accessed from seven of CoreSite’s markets across the country: Northern Virginia, Chicago, Silicon Valley, Denver, Los Angeles, New York, and Boston.

CoreSite customers can privately connect to Microsoft Azure, Office 365, and Dynamics 365 via the CoreSite Open Cloud Exchange, which provides high-performance, SLA-backed virtual connections and on-demand provisioning. The integration of Azure ExpressRoute with the Open Cloud Exchange lets CoreSite customers establish fast, reliable private connections into these Microsoft cloud services. With an Azure ExpressRoute connection, customers gain a natural extension of their data centers and can build hybrid applications that span on-premises infrastructure and Microsoft Cloud services without compromising privacy or performance.

CoreSite customers can efficiently transfer large data sets for high-performance computing, migrate virtual machines from dev-test environments in Azure to production environments housed in a CoreSite data center, and optimize replication for business continuity, disaster recovery, and other high-availability strategies.

“We are excited to announce the expanded availability of Microsoft Azure ExpressRoute connectivity to our customers across seven of our key markets,” said Brian Warren, senior vice president of engineering & products at CoreSite. “We are enabling our customers with the solutions necessary to bring together all of their applications, data, devices, and resources, both on-premise and in the cloud, with predictable, reliable, and secure high-throughput connections.”

Source: CloudStrategyMag

Rackspace Expands Private Cloud Capabilities

Rackspace® has announced the general availability of Rackspace Private Cloud powered by VMware®, which will now be built on VMware Cloud Foundation™. With this, customers can pair the foundational technology that is enabling their move out of the data center and into the cloud with the newest VMware capabilities. Rackspace Private Cloud powered by VMware built on VMware Cloud Foundation will enable full software-defined data center (SDDC) capabilities, including compute, storage, and networking that span the public and private cloud.

VMware Cloud Foundation accelerates IT’s time-to-market by providing a factory-integrated cloud infrastructure stack that is simple to use and includes a complete set of software-defined services for compute, storage, networking and security. Rackspace Private Cloud powered by VMware helps businesses maximize their VMware deployments by helping build, operate and optimize customers’ physical and virtual infrastructure, freeing IT resources from day-to-day infrastructure management so they can focus on their core business. Rackspace is one of the largest global providers in the VMware Cloud Provider™ Program and has partnered with VMware for more than 10 years delivering valuable solutions for mutual customers.

Built on VMware Cloud Foundation, Rackspace Private Cloud provides mutual customers with enhanced capabilities and management benefits including:

  • Standardized Architecture: Rackspace Private Cloud powered by VMware is built on VMware Validated Designs, which are based on best practices, making deployments more predictable and lower risk.
  • Continuous Updates and Lifecycle Management: Continuous updates allow for the most up-to-date VMware capabilities through lifecycle management of VMware components, thereby helping to improve users’ security posture.
  • Leverage Existing VMware Investments: Users get the control, flexibility, and choice needed to run VMware as easily as they would in their own data center. IT departments can migrate or extend to the VMware cloud with consistent tooling and skills. A consistent infrastructure architecture can be leveraged across multiple locations without the need to refactor code. Mutual customers maintain the value of existing investments made in training, VMware technology, and familiar tools by accelerating adoption of software-defined infrastructure.
  • Offload Physical and Virtual Infrastructure Operations: Rackspace delivers a hosted model, which eliminates many of the procurement and integration challenges that IT organizations face in their own data centers. Mutual customers also benefit from the ability to scale their solution quickly and as needed without the need for significant upfront CAPEX investments in data centers and hardware.
  • Managed by Rackspace, Powered by VMware: With Rackspace Private Cloud powered by VMware, customers have access to Fanatical Support® provided 24x7x365 from more than 150 VMware Certified Professionals (VCPs) to help migrate, architect, secure and operate Rackspace hosted clouds powered by VMware technologies.

“Provisioning hardware quickly is no longer considered a value for customers, it’s expected,” said Peter FitzGibbon, vice president and general manager of VMware at Rackspace. “The enhancement in our VMware private cloud delivery model through VMware Cloud Foundation will provide further value to new and existing Rackspace Private Cloud powered by VMware customers by giving them access to the most streamlined and innovative VMware SDDC capabilities and lifecycle management. We are excited to use VMware Cloud Foundation and look forward to continued innovation on the platform.”

“With a decade of proven success in helping customers meet their business demands, VMware and Rackspace are taking another step together to help mutual customers dramatically shorten the path to hybrid cloud,” said Geoffrey Waters, vice president of Global Cloud Sales at VMware. “VMware Cloud Foundation is the industry’s most advanced cloud infrastructure platform that unlocks the benefits of hybrid cloud by establishing a common, simple operational model across private and public clouds. Together with Rackspace and its renowned Fanatical Support, we will add great value to mutual customers in their digital transformation journey.”

Source: CloudStrategyMag

ByteGrid Chosen By Re-Quest, Inc. For Highly Secure Hosting Solutions

ByteGrid Holdings LLC has announced an agreement with Re-Quest, Inc. to provide highly secure hosting and technical expertise supporting Re-Quest’s delivery of Oracle hybrid cloud solutions to its customers.

Re-Quest has been successfully assisting customers around the world since 1991, helping them leverage their investments in Oracle technology and infrastructure assets to gain higher returns on investment, lower total cost of ownership, and measurable improvements in their business processes.

“Re-Quest prides itself on the high level of business process and technical expertise we bring to every client engagement, which is why we chose to partner with ByteGrid, providing our customers with high-value services across a complete spectrum of Oracle Hybrid Cloud solutions,” said Ron Zapar, CEO of Re-Quest.

“We know it’s important for Re-Quest to provide their customers with the technical perspective to implement projects that deliver complete customer satisfaction and partner success,” said Jason Silva, ByteGrid’s CTO. “We’re proud to partner with Re-Quest to ensure they’re successful in bringing that satisfaction by hosting their Oracle Hybrid Cloud technology solutions.”

In addition to this new agreement with Re-Quest, ByteGrid serves some of the world’s largest companies and government agencies, including numerous Fortune 50 companies.

Source: CloudStrategyMag

Rob Kakareka Joins Qligent As Manager Of Business Development

Qligent has announced that Robert “Rob” J. Kakareka has joined the company as its new manager/business development. A broadcast industry veteran with extensive sales experience, Kakareka is tasked with developing U.S. sales, customer relationships and market opportunities.

Kakareka reports directly to John Shoemaker, Qligent’s director of sales. The Atlanta-based Kakareka will focus on selling the company’s innovative Vision cloud-based monitoring and compliance platform to U.S. broadcasters, including major networks and call-letter stations.

Qligent’s Vision platform gathers and analyzes data from high-end probes that monitor distinct points along the distribution signal path, out to the last mile. This data enables broadcasters to ascertain that they are delivering an optimal Quality of Experience (QoE) for their viewers, and pinpoint technical issues they need to address.   

“I’m excited to be promoting the value and benefits of Qligent’s flagship product, Vision, at a time of rapid change in the broadcast industry,” said Kakareka. “Vision is uniquely positioned to support mission-critical broadcast distribution in a cost-efficient SaaS model as the industry expands from traditional over-the-air, cable and satellite channels to new digital, mobile and over-the-top (OTT) outlets.

“Despite this dramatic IP-centric shift, the broadcast industry remains a close-knit community with unique requirements and workflows,” Kakareka continued. “My goal is to show broadcasters that not only is our technology exceptional, but we have their backs as they venture into new and emerging market opportunities — including a true Monitoring as a Service business model that offloads monitoring, analysis and troubleshooting responsibilities to our managed services layer.”

With a career spanning over 20 years, Kakareka is no stranger to the broadcast industry, having held strategic sales and business development positions for many high-profile brands. These prior posts include Avid (Orad) Graphics Systems (from February 2014 to February 2016), Miranda (February 2012 to March 2014), Pixel Power (February 2008 to February 2012) and BarcoNet (February 2001 to February 2002).

In these national sales roles, Kakareka regularly outperformed sales quotas, broadened customer bases, boosted sales revenues, and built strong customer relationships with broadcasters nationwide. He’s also knowledgeable in all aspects of broadcast television operations, including graphics and virtual reality studio workflows, SaaS digital media services, big data-scaled storage, TV/film production and OTT/cloud workflows.

Kakareka has also tackled complex business development challenges, such as developing new business for Comprehensive Technical Group while creating new business plans for the system integration firm’s existing clients. While working for systems integrator Technical Innovations/Broadcast Solutions Group (from February 2002 to February 2008), he implemented a sales plan for the rollout of ATSC-compliant DTV systems, which he sold and integrated at hundreds of stations across North America, among other sales achievements.

 “In his stellar career, Rob has witnessed this industry’s many transitions firsthand, and that experience will be especially valuable as we engage with broadcasters to demonstrate how our unique, groundbreaking cloud software can solve today’s ‘uncontained’ distribution challenges,” said Shoemaker. “Our company has experienced rapid growth in a short time, and we’re confident that Rob’s industry expertise, insight and track record will help us capitalize on this momentum and significantly expand our U.S. customer base.”    

Source: CloudStrategyMag

IDG Contributor Network: Responsible retail: treating customer data with care

Retailers have become so adept at capturing and analyzing consumer data that there is now a real risk that they might alienate customers by revealing just how much they know about our lifestyles, habits, and preferences. So if retailers want their big data investments to pay off, they must tread carefully.

Big data exploitation in retail is no longer restricted to tracking and responding to broad trends; it’s become very personal. Which is great if the result is that customers find exactly what they were looking for; less so if it feels intrusive or invasive.

Analytics technology is now so sophisticated that, by drawing on an individual’s loyalty-card records, payment histories and browsing habits, retail marketing programs can detect an alcohol problem, whether someone has lost their job (because spending drops and premium brands are replaced by “value” purchases), if they’re away on holiday, and much more besides. (A few years ago, Target worked out that a teenage girl was pregnant before she knew herself.) 

This is not to imply that retailers are necessarily doing anything wrong or sinister (customers may well have given consent for this kind of data usage). But it can be unnerving to think that every time we browse online or in a store, that activity is being monitored to build a picture of our entire lives. Just think how often we are pestered with unsolicited promotions related to a product we may have glanced at only once.

Even in Europe, where measures to protect consumer privacy are fairly robust, customers are now being tracked via their mobiles as they enter or pass by stores. Their activity can be registered—even if they don’t have a loyalty card or store app. In the US, meanwhile, regulations are becoming looser rather than more stringent now that safeguards protecting internet search histories are being dismantled. So the scope for overstepping the mark is growing.

Snooping vs. problem-solving

If retailers want to impress and retain customers, rather than undermine their trust, they need to turn their attention to more beneficial ways of applying algorithms and data discovery.

In fashion, retailers are exploring ways of minimizing sales returns—a problem so costly across e-commerce that the likes of Amazon have gone so far as to ban customers who return items too often. In the US alone, merchandise returns were valued at $260.5 billion in 2015, roughly 8 percent of total sales, according to the National Retail Federation. Returns are a pain for customers, too: who wants the disappointment and hassle of having to send something back because it’s not quite right? A common cause of apparel returns is over-ordering because consumers haven’t been confident of getting the right size; this is something the industry is now trying to address with new combinations of technology and data insight.

Another option is to use customer intelligence to provide a more responsive logistics service. Amazon has patented a shipping model that anticipates what goods certain customers are going to order, so it can have the products waiting in a nearby warehouse for faster delivery. Combine this type of strategy with automated drone deliveries and the customer experience might soar while the cost of logistics (even the need for delivery partners) diminishes.

Greater empathy, better service

To the customer, real service innovation reduces the sense of being spied upon because of the perceived personal benefit. The end justifies the means. Just as, if I go to my regular bar, it suits me that they’ll have my favorite drink ready for me before I’ve even taken a seat because of how well they know me. Though if that happened in a bar I’d never been to before, that would be unsettling. Context—and consent—matter.

If the result of deeper customer insight is something genuinely useful to the consumer, surrendering anonymity and sharing data becomes a lot more palatable. People do appreciate easier access to the items they want, it does make their life easier if they don’t have to parcel up returns, and a timely recommendation can be useful in the right circumstances. So really, retailers just need to be a bit more thoughtful about how they apply their knowledge.

What isn’t in dispute is the strategic value of data. Figures from Gallup Behavioral Economics suggest that organizations that are able to exploit customer behavioral insights outperform their peers by 85 percent in sales growth, and more than 25 percent in gross margin. So keep building those data vaults and adding ever more sophisticated real-time analytics; the rest is down to using the insights to best effect.

This article is published as part of the IDG Contributor Network.

Source: InfoWorld Big Data

13 frameworks for mastering machine learning

H2O, now in its third major revision, provides access to machine learning algorithms by way of common development environments (Python, Java, Scala, R), big data systems (Hadoop, Spark), and data sources (HDFS, S3, SQL, NoSQL). H2O is meant to be used as an end-to-end solution for gathering data, building models, and serving predictions. For instance, models can be exported as Java code, allowing predictions to be served on many platforms and in many environments.

H2O can work as a native Python library, by way of a Jupyter Notebook, or by way of the R language in RStudio. The platform also includes an open source, web-based environment called Flow, exclusive to H2O, which lets you interact with the dataset during the training process, not just before or after.

Source: InfoWorld Big Data

EvoSwitch Releases White Paper

EvoSwitch has released a new white paper titled ‘How to Build a Better Cloud – Planning.’ Aimed at CIOs, CTOs, and IT Directors, the white paper provides expert input for a business-driven planning process for weighing multi-cloud environments and implementing a hybrid cloud strategy.

As a colocation services provider with data centers located in Amsterdam, the Netherlands, and Manassas (Washington, DC area) in the U.S., EvoSwitch serves a considerable number of clients with hybrid cloud needs. That’s why the colocation company established its cloud marketplace, EvoSwitch OpenCloud, two years ago. Through this marketplace, EvoSwitch customers can quickly and securely interconnect with a large number of other cloud platforms, including AWS, Google, and Azure.

Based partly on these OpenCloud and hybrid cloud customer experiences, as well as on the author’s own cloud management expertise, the white paper provides CIOs, CTOs, and IT Directors with business-driven guidance for successfully planning their hybrid cloud strategy. Titled ‘How to Build a Better Cloud – Planning,’ it was written by Patrick van der Wilt, a seasoned data center services and cloud computing professional who serves as the commercial director for EvoSwitch.

EvoSwitch’s new white paper, ‘How to Build a Better Cloud – Planning,’ runs 30 pages and is available in English. It can be downloaded for free from the EvoSwitch website.

Source: CloudStrategyMag

How to use Apache Kafka messaging in .Net

Apache Kafka is an open source, distributed, scalable, high-performance, publish-subscribe message broker. It is a great choice for building systems capable of processing high volumes of data. In this article we’ll look at how we can create a producer and consumer application for Kafka in C#.

To get started using Kafka, you should download Kafka and ZooKeeper and install them on your system. This DZone article contains step-by-step instructions for setting up Kafka and ZooKeeper on Windows. When you have completed the setup, start ZooKeeper and Kafka and meet me back here.

Apache Kafka architecture

In this section, we will examine the architectural components and related terminology in Kafka. Basically, Kafka consists of the following components:

  • Kafka Cluster – a collection of one or more servers known as brokers
  • Producer – the component that is used to publish messages
  • Consumer – the component that is used to retrieve or consume messages
  • ZooKeeper – a centralized coordination service used to maintain configuration information across cluster nodes in a distributed environment

The fundamental unit of data in Kafka is a message. A message in Kafka is represented as a key-value pair. Kafka converts all messages into byte arrays. It should be noted that communications between the producers, consumers, and clusters in Kafka use the TCP protocol. Each server in a Kafka cluster is known as a broker. You can scale Kafka horizontally simply by adding additional brokers to the cluster.

The following diagram illustrates, at a high level, the architectural components in Kafka.

[Figure: Apache Kafka architecture. Credit: Apache Foundation]

A topic in Kafka represents a logical collection of messages. You can think of it as a feed or category to which a producer can publish messages. Incidentally, a Kafka broker contains one or more topics that are in turn divided into one or more partitions. A partition is defined as an ordered sequence of messages. Partitions are the key to the ability of Kafka to scale dynamically, as partitions are distributed across multiple brokers.

You can have one or more producers pushing messages into a cluster at any given time. A producer in Kafka publishes messages to a particular topic, and a consumer subscribes to a topic to receive the messages.

Choosing between Kafka and RabbitMQ

Both Kafka and RabbitMQ are popular open source message brokers that have been in wide use for quite some time. When should you choose Kafka over RabbitMQ? The choice depends on a few factors.

RabbitMQ is a fast message broker written in Erlang. Its rich routing capabilities and ability to offer per message acknowledgments are strong reasons to use it. RabbitMQ also provides a user-friendly web interface that you can use to monitor your RabbitMQ server. Take a look at my article to learn how to work with RabbitMQ in .Net.  

However, when it comes to supporting large deployments, Kafka scales much better than RabbitMQ – all you need to do is add more partitions. It should also be noted that RabbitMQ clusters do not tolerate network partitions; if that is a concern, you should use federation instead of clustering. You can read more about RabbitMQ clusters and network partitions in the RabbitMQ documentation.

Kafka also clearly outshines RabbitMQ in performance. A single Kafka instance can handle 100K messages per second, versus closer to 20K messages per second for RabbitMQ. Kafka is also a good choice when you want to transmit messages at low latency to batch consumers that may be either online or offline.

Building the Kafka producer and Kafka consumer

In this section we will examine how we can build a producer and consumer for use with Kafka. To do this, we will build two console applications in Visual Studio – one of them will represent the producer and the other the consumer. And we will need to install a Kafka provider for .Net in both the producer and the consumer application.

Incidentally, there are many providers available, but in this post we will be using kafka-net, a native C# client for Apache Kafka. You can install kafka-net via the NuGet package manager from within Visual Studio (for example, by running Install-Package kafka-net in the Package Manager Console). The source code is available in the kafka-net GitHub repository.

Here is the main method for our Kafka producer:

static void Main(string[] args)
{
    string payload = "Welcome to Kafka!";
    string topic = "IDGTestTopic";
    Message msg = new Message(payload);
    Uri uri = new Uri("http://localhost:9092");
    var options = new KafkaOptions(uri);
    var router = new BrokerRouter(options);
    var client = new Producer(router);
    client.SendMessageAsync(topic, new List<Message> { msg }).Wait();
    Console.ReadLine();
}
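
If your Kafka cluster has more than one broker, you should be able to pass several seed broker URIs to KafkaOptions so the client can discover the rest of the cluster from them. The fragment below is a small, hypothetical variation on the producer above, not part of the original walkthrough; the broker addresses are placeholders.

// Hypothetical broker addresses, used only as seed brokers for cluster discovery.
var options = new KafkaOptions(
    new Uri("http://broker1:9092"),
    new Uri("http://broker2:9092"));
var router = new BrokerRouter(options);
var client = new Producer(router);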

And here is the code for our Kafka consumer:

static void Main(string[] args)
{
    string topic = "IDGTestTopic";
    Uri uri = new Uri("http://localhost:9092");
    var options = new KafkaOptions(uri);
    var router = new BrokerRouter(options);
    var consumer = new Consumer(new ConsumerOptions(topic, router));
    foreach (var message in consumer.Consume())
    {
        Console.WriteLine(Encoding.UTF8.GetString(message.Value));
    }
    Console.ReadLine();
}

Note that you should include the Kafka namespaces in both the producer and consumer applications as shown below, along with the standard System, System.Collections.Generic, and System.Text namespaces used in the code above.

using System;
using System.Collections.Generic;
using System.Text;
using KafkaNet;
using KafkaNet.Model;
using KafkaNet.Protocol;

Finally, run the producer first and then the consumer. And that’s it! You should see the message “Welcome to Kafka!” displayed in the consumer console window.

While we have many messaging systems to choose from—RabbitMQ, MSMQ, IBM MQ Series, and so on—Kafka is ahead of the pack for dealing with large streams of data that can originate from many publishers. Kafka is often used for IoT applications, log aggregation, and other use cases that require low latency and strong message delivery guarantees.

If your application needs a fast and scalable message broker, Kafka is a great choice. Stay tuned for more posts on Kafka in this blog.

Source: InfoWorld Big Data

What is machine learning? Software derived from data

You’ve probably encountered the term “machine learning” more than a few times lately. Often used interchangeably with artificial intelligence, machine learning is in fact a subset of AI, both of which can trace their roots to MIT in the late 1950s.

Machine learning is something you probably encounter every day, whether you know it or not. The Siri and Alexa voice assistants, Facebook’s and Microsoft’s facial recognition, Amazon and Netflix recommendations, the technology that keeps self-driving cars from crashing into things – all are a result of advances in machine learning.

While still nowhere near as complex as a human brain, systems based on machine learning have achieved some impressive feats, like defeating human challengers at chess, Jeopardy, Go, and Texas Hold ‘em.

Dismissed for decades as overhyped and unrealistic (the infamous “AI winter”), both AI and machine learning have enjoyed a huge resurgence over the last few years, thanks to a number of technological breakthroughs, a massive explosion in cheap computing horsepower, and a bounty of data for machine learning models to chew on.

Self-taught software

So what is machine learning, exactly? Let’s start by noting what it is not: a conventional, hand-coded, human-programmed computing application.

Unlike traditional software, which is great at following instructions but terrible at improvising, machine learning systems essentially code themselves, developing their own instructions by generalizing from examples.

The classic example is image recognition. Show a machine learning system enough photos of dogs (labeled “dogs”), as well as pictures of cats, trees, babies, bananas, or any other object (labeled “not dogs”), and if the system is trained correctly it will eventually get good at identifying canines, without a human being ever telling it what a dog is supposed to look like.

The spam filter in your email program is a good example of machine learning in action. After being exposed to hundreds of millions of spam samples, as well as non-spam email, it has learned to identify the key characteristics of those nasty unwanted messages. It’s not perfect, but it’s usually pretty accurate.

Supervised vs. unsupervised learning

This kind of machine learning is called supervised learning, which means that someone exposed the machine learning algorithm to an enormous set of training data, examined its output, then continuously tweaked its settings until it produced the expected result when shown data it had not seen before. (This is analogous to clicking the “not spam” button in your inbox when the filter traps a legitimate message by accident. The more you do that, the more the accuracy of the filter should improve.)

The most common supervised learning tasks involve classification and prediction (i.e., regression). Spam detection and image recognition are both classification problems. Predicting stock prices is a classic example of a regression problem.
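
To make the supervised workflow concrete, here is a minimal, self-contained C# sketch of a toy perceptron-style spam classifier. It is not tied to any particular library, and the features and training examples are invented purely for illustration; the point is that the program derives its decision rule from labeled examples rather than from hand-written rules.

using System;

// Minimal supervised-learning sketch: a toy perceptron trained on labeled examples.
// Features and labels are invented purely for illustration.
class PerceptronDemo
{
    static int Predict(double[] x, double[] w, double b)
    {
        double sum = b;
        for (int i = 0; i < x.Length; i++) sum += w[i] * x[i];
        return sum > 0 ? 1 : 0;
    }

    static void Main()
    {
        // Each example: { containsLink, isAllCaps, fromKnownSender }; label 1 = spam, 0 = not spam.
        double[][] examples =
        {
            new[] { 1.0, 1.0, 0.0 },
            new[] { 1.0, 0.0, 0.0 },
            new[] { 0.0, 0.0, 1.0 },
            new[] { 0.0, 1.0, 1.0 }
        };
        int[] labels = { 1, 1, 0, 0 };

        double[] weights = new double[3];
        double bias = 0.0, learningRate = 0.1;

        // Training: nudge the weights whenever a prediction disagrees with its label.
        for (int epoch = 0; epoch < 20; epoch++)
        {
            for (int i = 0; i < examples.Length; i++)
            {
                int error = labels[i] - Predict(examples[i], weights, bias);
                for (int j = 0; j < weights.Length; j++)
                    weights[j] += learningRate * error * examples[i][j];
                bias += learningRate * error;
            }
        }

        // Classify an unseen message: has a link, no all-caps text, unknown sender.
        Console.WriteLine(Predict(new[] { 1.0, 0.0, 0.0 }, weights, bias) == 1 ? "spam" : "not spam");
    }
}

Real systems use far richer features, far more data, and more robust algorithms, but the loop is the same: predict, compare against the label, adjust.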

A second kind of machine learning is called unsupervised learning. This is where the system pores over vast amounts of data to learn what “normal” data looks like, so it can detect anomalies and hidden patterns. Unsupervised machine learning is useful when you don’t really know what you’re looking for, so you can’t train the system to find it.

Unsupervised machine learning systems can identify patterns in vast amounts of data many times faster than humans can, which is why banks use them to flag fraudulent transactions, marketers deploy them to identify customers with similar attributes, and security software employs them to detect hostile activity on a network.

Clustering and association rule learning are two examples of unsupervised learning algorithms. Clustering is the secret sauce behind customer segmentation, for example, while association rule learning is used for recommendation engines.
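
As a contrast, here is an equally minimal unsupervised sketch: k-means clustering with two clusters over invented one-dimensional “weekly spend” figures. No labels are supplied; the algorithm discovers the low-spend and high-spend segments on its own.

using System;
using System.Linq;

// Minimal unsupervised-learning sketch: k-means with k = 2 on one-dimensional data.
// The "weekly spend" figures are invented purely for illustration.
class KMeansDemo
{
    static void Main()
    {
        double[] spend = { 12, 15, 14, 11, 90, 95, 88, 102 };
        double c1 = spend.Min(), c2 = spend.Max();   // start the two centroids far apart
        int[] assignment = new int[spend.Length];

        for (int iteration = 0; iteration < 10; iteration++)
        {
            // Assignment step: attach each point to its nearest centroid.
            for (int i = 0; i < spend.Length; i++)
                assignment[i] = Math.Abs(spend[i] - c1) <= Math.Abs(spend[i] - c2) ? 0 : 1;

            // Update step: move each centroid to the mean of the points assigned to it.
            double[] group1 = spend.Where((x, i) => assignment[i] == 0).ToArray();
            double[] group2 = spend.Where((x, i) => assignment[i] == 1).ToArray();
            if (group1.Length > 0) c1 = group1.Average();
            if (group2.Length > 0) c2 = group2.Average();
        }

        Console.WriteLine($"Low-spend segment centered near {Math.Min(c1, c2):F1}, " +
                          $"high-spend segment centered near {Math.Max(c1, c2):F1}");
    }
}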

Limitations of machine learning

Because each machine learning system creates its own connections, how a particular one actually works can be a bit of a black box. You can’t always reverse engineer the process to discover why your system can distinguish between a Pekingese and a Persian. As long as it works, it doesn’t really matter.

But a machine learning system is only as good as the data it has been exposed to – the classic example of “garbage in, garbage out.” When poorly trained or exposed to an insufficient data set, a machine learning algorithm can produce results that are not only wrong but discriminatory.

HP got into trouble back in 2009 when facial recognition technology built into the webcam on an HP MediaSmart laptop was unable to detect the faces of African Americans. In June 2015, faulty algorithms in the Google Photos app mislabeled two black Americans as gorillas.

Another dramatic example: Microsoft’s ill-fated Taybot, a March 2016 experiment to see if an AI system could emulate human conversation by learning from tweets. In less than a day, malicious Twitter trolls had turned Tay into a hate-speech-spewing chat bot from hell. Talk about corrupted training data.

A machine learning lexicon

But machine learning is really just the tip of the AI berg. Other terms closely associated with machine learning are neural networks, deep learning, and cognitive computing.

Neural network. A computer architecture designed to mimic the structure of neurons in our brains, with each artificial neuron (microcircuit) connecting to other neurons inside the system. Neural networks are arranged in layers, with neurons in one layer passing data to multiple neurons in the next layer, and so on, until eventually they reach the output layer. This final layer is where the neural network presents its best guesses as to, say, what that dog-shaped object was, along with a confidence score.

There are multiple types of neural networks for solving different types of problems. Networks with large numbers of layers are called “deep neural networks.” Neural nets are some of the most important tools used in machine learning scenarios, but not the only ones.
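
For readers who want to see the arithmetic behind that description, here is a tiny C# sketch of a forward pass through one layer of two artificial neurons. The inputs, weights, and biases are made-up numbers; in a real network the weights are learned from data during training.

using System;

// Minimal sketch of a forward pass through one layer of two artificial neurons.
// Inputs, weights, and biases are made-up numbers; a real network learns them from data.
class NeuronLayerDemo
{
    static void Main()
    {
        double[] inputs = { 0.8, 0.2, 0.5 };   // e.g., three normalized pixel intensities

        // One row of weights (plus a bias) per neuron in the layer.
        double[][] weights = { new[] { 0.4, -0.6, 0.9 }, new[] { -0.3, 0.8, 0.1 } };
        double[] biases = { 0.1, -0.2 };

        for (int n = 0; n < weights.Length; n++)
        {
            // Weighted sum of the inputs feeding this neuron.
            double sum = biases[n];
            for (int i = 0; i < inputs.Length; i++)
                sum += weights[n][i] * inputs[i];

            // A sigmoid activation squashes the sum into (0, 1), a confidence-like score
            // that would be passed on to the next layer.
            double activation = 1.0 / (1.0 + Math.Exp(-sum));
            Console.WriteLine($"Neuron {n}: {activation:F3}");
        }
    }
}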

Deep learning. This is essentially machine learning on steroids, using multi-layered (deep) neural networks to arrive at decisions based on “imperfect” or incomplete information. The deep learning system DeepStack is what defeated 11 professional poker players last December, by constantly recomputing its strategy after each round of bets. 

Cognitive computing. This is the term favored by IBM, creators of Watson, the supercomputer that kicked humanity’s ass at Jeopardy in 2011. The difference between cognitive computing and artificial intelligence, in IBM’s view, is that instead of replacing human intelligence, cognitive computing is designed to augment it—enabling doctors to diagnose illnesses more accurately, financial managers to make smarter recommendations, lawyers to search caselaw more quickly, and so on.

This, of course, is an extremely superficial overview. Those who want to dive more deeply into the intricacies of AI and machine learning can start with this semi-wonky tutorial from the University of Washington’s Pedro Domingos, or this series of Medium posts from Adam Geitgey, as well as “What deep learning really means” by InfoWorld’s Martin Heller.

Despite all the hype about AI, it’s not an overstatement to say that machine learning and the technologies associated with it are changing the world as we know it. Best to learn about it now, before the machines become fully self-aware.

Source: InfoWorld Big Data

Equinix Collaborates With SAP

Equinix, Inc. has announced that it is offering direct and private access to the SAP® Cloud portfolio, including SAP HANA® Enterprise Cloud and SAP Cloud Platform, in multiple markets across the globe. Dedicated, private connections are available via Equinix Cloud Exchange™ and the SAP Cloud Peering service in the Equinix Amsterdam, Frankfurt, Los Angeles, New York, Silicon Valley, Sydney, Toronto and Washington, D.C. International Business Exchange™ (IBX®) data centers, with additional markets planned for later this year. Through this connectivity, enterprise customers benefit from high-performance and secure access to SAP cloud services as part of a hybrid or multi-cloud strategy.

“Equinix recognizes that enterprise cloud needs vary, and by aligning a company’s business requirements to the best cloud services, they can create a more agile, flexible and scalable IT infrastructure.  With more than 130 million cloud subscribers, SAP has a strong foothold in the enterprise market, and by providing these customers and more with dedicated connectivity to their SAP software environments simply, securely and cost-effectively from Equinix Cloud Exchange, we help customers connect and build a hybrid cloud solution that works for them,” said Charles Meyers, president of strategy, services and innovation, Equinix.

As cloud adoption continues to rise, so does the growth of multi-cloud deployments. In fact, according to the recent IDC CloudView survey, 85% of respondents are either currently using a multi-cloud strategy or plan to do so in the near-term.* Equinix Cloud Exchange, with direct access to multiple cloud services and platforms, such as the SAP Cloud portfolio, helps enterprise customers to expedite the development of hybrid and multi-cloud solutions across multiple locations, with the goal of gaining global scale, performance and security.

SAP Cloud Peering provides direct access inside the Equinix Cloud Exchange to help customers looking to reap the benefits of the SAP Cloud portfolio, with the control and predictability of a dedicated connection. Initially, access will be available for SAP HANA Enterprise Cloud and SAP Cloud Platform, which serve as SAP’s IaaS and PaaS solutions, respectively. SAP and Equinix plan to make available SAP SuccessFactors®, SAP Hybris®, SAP Ariba® solutions, and others in the near future.

“SAP joined the Equinix Cloud Exchange platform to address customer requirements for enterprise hybrid architecture in an environment that lends itself to the very highest levels of performance and reliability. With SAP’s traditional base of more than 300,000 software customers seeking ways to take the next step in a cloud-enabled world, SAP has established efficient capabilities to deliver on those requirements,” said Christoph Boehm, senior vice president and head of Cloud Delivery Services, SAP.

SAP continues to gain traction in enterprise cloud adoption, with particular strength in APAC and EMEA. According to a recent 451 Research** note, SAP’s APAC cloud subscription and support revenue grew by 54%, while it rose by 35% in EMEA and by 27% in the Americas.  Access to these cloud-based services in Equinix’s global footprint of data centers will help drive adoption and reach of SAP cloud offerings.

Equinix offers the industry’s broadest choice in cloud service providers, including AWS, Microsoft Azure, Oracle, Google Cloud Platform, and other leading cloud providers such as SAP.  Equinix offers direct connections to many of these platforms via Equinix Cloud Exchange or Equinix Cross Connects. Equinix Cloud Exchange is an advanced interconnection solution that provides virtualized, private direct connections that bypass the Internet to provide better security and performance with a range of bandwidth options. It is currently available in 23 markets, globally.

 

*Source:  IDC CloudView Survey, April 2017. N=6084 worldwide respondents, weighted by country, industry and company size. 
**Source: 451 Research, “SAP hits Q4 and FY2016 targets as cloud subscription/support revenue jumps 31%,” February 1, 2017

Source: CloudStrategyMag