Microservices: What They Mean and How They Impact the Channel

“Microservices” is fast becoming one of the newest buzzwords that IT decision makers need to know as DevOps redefines modern software application delivery. Here’s a primer on what microservices mean and how the concept is affecting the channel.

If you’re a programmer, you’re probably already familiar with the idea of microservices. As the term implies, the concept centers on breaking software down into small parts that interact with one another in modular ways.

If that sounds like the design principles behind Unix-like operating systems, or like object-oriented programming, that’s because it is. Microservices are really just an extension of these established practices into a broader context.
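
To make the idea concrete, here is a minimal sketch of a single-purpose service using only the Python standard library. The service name, its data and its port are hypothetical stand-ins, not a prescribed design.

```python
# A minimal, single-purpose "price" service built on the Python standard
# library. The service name, data and port are hypothetical stand-ins.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PRICES = {"widget": 9.99, "gadget": 24.50}  # stand-in for a real data store

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One narrow responsibility: answer GET /price/<product>.
        _, sep, product = self.path.partition("/price/")
        if sep and product in PRICES:
            body = json.dumps({"product": product,
                               "price": PRICES[product]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404, "unknown product")

if __name__ == "__main__":
    # Other services talk to this one only over HTTP, so either side can
    # be rewritten or replaced independently.
    HTTPServer(("127.0.0.1", 8080), PriceHandler).serve_forever()
```

Because the service’s only contract is its HTTP interface, it can be rewritten, replaced or scaled without touching anything that calls it.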

But now, in the modern, DevOps-centric application delivery landscape, microservices are at the center of everything.

Microservices and the Channel

So that’s what microservices mean if you’re a programmer. But what about people who simply work with software but don’t write it?

For the channel as a whole, microservices introduce a new calculus to the way companies develop and integrate software. In particular, microservices mean that:

  • Software components are more interchangeable than ever. Gone are the “monolithic” software architectures of the past, which made integration difficult. With microservices, it’s easy for vendors to break software down into small parts and arrange those parts in different combinations in order to build integrated products.
  • Software delivery is faster. One of the chief selling points of a microservices architecture (and of DevOps in general) is that it speeds up software development and delivery by making it continuous. The result is an expectation on the part of partners and customers that products will be released early and often, rather than according to the lengthy, delay-prone development schedules associated with “waterfall” software development.
  • Partners have more choices. Because microservices break software stacks down into many small, interchangeable parts, organizations have more choice than ever when deciding which partners and products to work with. If you’re designing a vertical product offering, you can easily build in a database from one partner and a Web server from another, for example (see the sketch after this list). Plus, by using Continuous Integration platforms, you can quickly change which languages or frameworks you work in or support, without revamping your entire product.
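
As a rough illustration of that interchangeability, the sketch below composes the hypothetical price service from earlier into a second, independent service. Swapping in a different provider means changing only the endpoint it calls.

```python
# A sketch of composing interchangeable parts: a "storefront" client that
# consumes the hypothetical price service above over plain HTTP.
import json
import urllib.request

PRICE_URL = "http://127.0.0.1:8080/price/{}"  # hypothetical endpoint

def quote(product: str) -> str:
    # Swapping in a different price provider means changing PRICE_URL,
    # not rewriting this service.
    with urllib.request.urlopen(PRICE_URL.format(product)) as resp:
        data = json.load(resp)
    return f"{data['product']}: ${data['price']:.2f}"

if __name__ == "__main__":
    print(quote("widget"))  # -> widget: $9.99
```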

Like the cloud (a concept that has existed since the days of virtual terminals) or IoT (also not a new idea), microservices are not a new concept at all. But they have become crucial to the way software is built and delivered. And they fuel a more flexible channel landscape, in which partner opportunities are not constrained by rigid software design.

Source: TheWHIR

Do We Need a More Open, Private, "Decentralized" Internet?

Brought to you by The VAR Guy

Is it time to rebuild the web? That’s what Tim Berners-Lee and other internet pioneers are now saying in response to concerns about censorship, electronic spying and excessive centralization on the web.

Last week, Berners-Lee, the guy who played a leading role in creating the web in 1989, met with other computer scientists at the Decentralized Web Summit in San Francisco. Attendees also included the likes of Mitchell Baker, head of Mozilla, and Brewster Kahle of the Internet Archive.

Their discussions centered around making the web “open, secure and free of censorship by distributing data, processing, and hosting across millions of computers around the world, with no centralized control,” according to the conference site.

They’re not alone. The organization Redecentralize has been working toward similar goals since last year. The Electronic Frontier Foundation’s work for digital privacy and openness goes back even further. And BlueYard, a VC firm, hosted a similar event about web openness in Berlin just a few days before the Decentralized Web Summit.

It seems clear that there is sustained and growing interest in online freedom and privacy.

How the Web Became Less Open

What are all these people worried about? What’s wrong with the web today as they see it?

Mostly, they don’t like the tendency of online data to be stored on, and routed through, centralized servers. That centralization is a real problem for anyone who believes information online should be free and private.

They’re also not happy about online censorship, in which governments or other authorities block certain websites within their jurisdictions. And they don’t like governments spying on web users in the way the Snowden revelations highlighted.

READ MORE: Government Blunder Exposes Snowden as Target in 2013 Lavabit Email Case

How to Make the Web Open Again

Arguably, however, these are not problems that can be solved by rearchitecting the web alone.

The main issue with online privacy and freedom isn’t that the design of the web — let alone the internet as a whole — is fundamentally flawed. Instead, it’s that most online consumer services have been designed in ways that centralize information and place privacy controls in the hands of vendors, not users.

There are lots of effective ways to protect your privacy online. You can browse using Tor or a VPN to hide your identity and view censored websites. You can use tools like HTTPS Everywhere to add another layer of data encryption. You can avoid placing data in clouds whose servers you don’t control. You could run your own email server if you really wanted. You can center your online activity around platforms like Usenet, which remains as free and decentralized now as it was decades ago, before the web appeared and made Usenet forums an afterthought for most internet users.
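
As one concrete example of these tools, the sketch below routes an HTTP request through a local Tor SOCKS proxy. It assumes a Tor daemon is already listening on the default port 9050 and that the optional PySocks dependency for the requests library ("requests[socks]") is installed.

```python
# Routing a request through a local Tor SOCKS proxy. Assumes a Tor daemon
# on 127.0.0.1:9050 and the optional PySocks dependency ("requests[socks]").
import requests

# "socks5h" (note the h) resolves DNS through Tor as well, so lookups
# don't leak to the local resolver.
TOR_PROXY = "socks5h://127.0.0.1:9050"

def fetch_via_tor(url: str) -> str:
    resp = requests.get(
        url,
        proxies={"http": TOR_PROXY, "https": TOR_PROXY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    # check.torproject.org reports whether the request arrived via Tor.
    print(fetch_via_tor("https://check.torproject.org/")[:200])
```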

SEE ALSO: IANA Transition Proposal Gets NTIA Stamp of Approval

But most ordinary consumers don’t do these things. They have not heard of Tor. They might use a VPN for work, but probably don’t understand how that’s different from using a VPN for privacy reasons. They upload private data to proprietary clouds without hesitation. They use proprietary protocols, like Skype, for online communication even though they have no way of verifying that their data remains as secure and private as the service providers claim.

Changing Consumers

In other words, what needs to change is web user behavior, not the technology itself.

Yes, there might be changes programmers could make to the way the web works that would make it inherently more private and harder to censor. But the tools for beating censorship and assuring privacy already exist. The challenge is just to make them easy enough that ordinary people will actually use them.

SEE ALSO: FBI Subpoenas Tor Developer to Testify in Criminal Hacking Investigation

And that presents a huge opportunity for service providers. As user interest in online openness and privacy increases, organizations that prioritize these features and make them easy for non-geeks to implement will thrive.

Original article appeared here: Do We Need a More Open, Private, “Decentralized” Internet?

Source: TheWHIR

Container and Microservices Myths: The Red Hat Perspective

Brought to you by The VAR Guy

What are containers and microservices? What are they not? These are questions that Lars Herrmann, general manager of the Integrated Solutions Business Unit at Red Hat, recently answered for The VAR Guy in comments on popular container misconceptions and myths.

It’s no secret that containers have fast become one of the hottest new trends in computing. But like cloud computing or traditional virtualization before them, containers do not live up to the hype in all respects. In order to leverage container technology effectively, organizations need to understand the history behind containers, their limitations and where they fit into the data center landscape alongside virtual machines.

SEE ALSO: Microsoft Launches Azure Container Service

The discussion of container misconceptions below is a condensed version of commentary delivered by Herrmann to The VAR Guy.

Misconception #1: Containers are New

Container packaging as we use it today is new (highlighted by the Docker/OCI image format), as is the concept of using container orchestration like Kubernetes to scale workloads across clusters of hosts. But the idea of sharing an operating system instance while isolating different parts of an application is not. From Unix chroot to FreeBSD jails to Sun Microsystems’ Solaris Zones, solutions for splitting up and dedicating system resources have been available for some time now.
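
For a feel of how old the idea is, here is a minimal sketch of that classic form of isolation: confining a process to a filesystem subtree with chroot. It is Unix-only, must run as root, and the jail directory is a hypothetical stand-in prepared in advance.

```python
# Confining a child process to a filesystem subtree with chroot, the
# oldest of these isolation mechanisms. Unix-only; must run as root;
# /srv/jail is a hypothetical directory prepared in advance.
import os

JAIL = "/srv/jail"

pid = os.fork()
if pid == 0:
    os.chroot(JAIL)  # the child now sees JAIL as its "/"
    os.chdir("/")
    print("child sees:", os.listdir("/"))  # only JAIL's contents
    os._exit(0)
os.waitpid(pid, 0)
```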

It’s also important to note that many of the technologies inherent to Linux containers (namespaces, cgroups, etc.) have been the foundation of many first-generation PaaS offerings. What’s new is the ability to leverage the container capabilities of Linux to run and manage a very broad set of applications, ranging from cloud-native microservices to existing, traditional applications.
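
Here is a small sketch of one of those kernel primitives: unsharing the UTS namespace so that a hostname change is visible only inside the new namespace. It is Linux-only, needs root (or CAP_SYS_ADMIN), and the constant value comes from the kernel’s sched.h header.

```python
# Unsharing the UTS namespace, one of the Linux primitives containers are
# built on. Linux-only; needs root or CAP_SYS_ADMIN.
import ctypes
import socket

CLONE_NEWUTS = 0x04000000  # from <sched.h>
libc = ctypes.CDLL("libc.so.6", use_errno=True)

if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare failed (try running as root)")

# The hostname change below is visible only inside the new namespace;
# the host's hostname is untouched. Container runtimes combine this with
# mount, PID and network namespaces, plus cgroups for resource limits.
name = b"sandbox"
libc.sethostname(name, len(name))
print("inside namespace:", socket.gethostname())  # -> sandbox
```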

Misconception #2: Containers are Completely Self-Contained Entities

Despite their name, containers are not completely self-contained. Every container “guest” shares the host OS kernel and its services. This reduces overhead and improves performance, but it can also introduce security and interoperability issues.
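
That sharing is easy to demonstrate. The hedged sketch below assumes Docker is installed and running on a Linux host, and simply compares the kernel version the host reports with the one a container reports. They match, because there is only one kernel.

```python
# Comparing the kernel the host reports with the kernel a container
# reports. Assumes Docker is installed and running on a Linux host.
import platform
import subprocess

host_kernel = platform.release()
container_kernel = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-r"],
    capture_output=True, text=True, check=True,
).stdout.strip()

print(f"host:      {host_kernel}")
print(f"container: {container_kernel}")
# Different userlands, same kernel: the container is not self-contained.
assert host_kernel == container_kernel
```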

Misconception #3: Containers can Replace Virtual Machines

Containers won’t replace virtual machines wholesale because they don’t work exactly like virtual machines. Each has its place in the enterprise, and companies must figure out which makes sense for what workloads. In short, virtualization provides flexibility by abstraction from hardware, while containers provide speed and agility through lightweight application packaging and isolation.

So, instead of thinking of containers as replacing virtual machines, companies should be thinking about containers as a complement to virtual machines — with the workload and infrastructure needs determining what to use when.

Misconception #4: Containers are Universally Portable

Containers depend on the host OS kernel and services to function, with “depend” being the operative word. Containers must also be able to move across physical hardware, hypervisors, private clouds, public clouds and more. Indeed, for containers to be truly portable, developers must have in place an integrated application delivery platform built on open standards.

As with so many things, standards are key — across the entire ecosystem.

Misconception #5: Containers are Secure by Default

There are many benefits to running containers in the enterprise, but those benefits must be weighed against the risks that can arise with the technology. Think about two physical machines: you can isolate them on the network, and if one goes down or is infected with a virus, the other machine can be defended fairly easily. In a containerized environment, on the other hand, the OS kernel on the host system is used by all of the containers. That kind of sharing brings with it inherent risk.

The Linux kernel’s isolation combines process isolation with namespaces, and it works very well. But by design it does not close off every path malicious code could take to break out and gain access to the host or to other containers. That’s why technologies such as SELinux provide a needed additional layer of policy and access control.
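
In practice, that extra layering often shows up as hardening flags at launch time. The sketch below, which assumes Docker and uses a generic Alpine image, drops all Linux capabilities, blocks privilege escalation and mounts the root filesystem read-only. It is an illustration of defense in depth, not Red Hat’s specific tooling.

```python
# Launching a container with defense-in-depth flags. Assumes Docker;
# "alpine" is just a convenient, generic image for the demonstration.
import subprocess

subprocess.run(
    [
        "docker", "run", "--rm",
        "--cap-drop", "ALL",                    # no extra kernel capabilities
        "--security-opt", "no-new-privileges",  # block setuid escalation
        "--read-only",                          # immutable root filesystem
        "alpine", "id",
    ],
    check=True,
)
```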

What is most important, though, is what’s running inside the container. Industry best practices, such as relying on trusted components obtained from trusted sources, complemented with scanning capabilities to “trust but verify” enterprise applications, apply to containers as well. The immutable nature of containers creates an opportunity to manage changes at the image level, rather than in the running instance. The container distribution architecture, often implemented as federated registries, therefore becomes a critical element in managing the security and patching of containers.

Original article appeared here: Container and Microservices Myths: The Red Hat Perspective

Source: TheWHIR