2017 Holiday Gift Guide: 35 Tech Gifts for Every Budget

Happy Holidays everyone! It’s that fateful time of year when snow comes around (even in Texas) and we scramble to check off our holiday lists. If you’re like me, you may be stuck on what to gift your favorite people.

This challenge becomes even more of a head-scratcher if you’re a techie and everything new coming out looks so cool.

Before we get into this year’s awesome list, a couple of points to remember:

  1. Creativity goes a long way. It doesn’t have to be large and shiny for your friends and family to enjoy. In fact, the more thought you put into it, the more valuable your present will become. There are so many cool, inexpensive ideas out there that let you get creative. Let me give you an example: Star Wars gadgets and gifts. There’s a fun Star Wars BB-8 USB Car Charger, or maybe an awesome Millennium Falcon Star Wars Lighting Gadget Lamp. Both are under $50 and are super fun. If you’re a slightly bigger Star Wars spender, definitely check out the Sphero Star Wars BB-8 App Controlled Robot with Star Wars Force Band. The fun doesn’t have to end there; there are other super fun ideas around popular shows like Stranger Things and even Doctor Who (check out this Doctor Who TARDIS Bottle Opener with Sound FX Effects).
  2. Before you get them a gadget, exercise some caution. I’ve mentioned this before, but it’s an important yearly reminder. Just because you think it’s awesome and amazing doesn’t mean your friends or relatives will like the gift. Put yourself in their shoes. Will they use the device? Will they enjoy it? Is it really a great idea to get your grandparents an Amazon Echo? My best piece of advice when getting a techie gift for someone is to spend a second thinking from their perspective. This will give you a better idea of what they want and how they’ll use the gift. Remember, creativity goes further than just getting them a ‘thing.’ Finally, there are some gifts you should either avoid or be ready to have re-gifted. Before you get anyone another Bluetooth speaker, a battery pack, or a light-up fidget spinner, check whether they already have five of them. A lot of these kinds of presents have really become the ‘scented candle’ of techie gifts.

Finally, if you’re curious to see our lists of amazing gifts from the previous two years – many of which are still very applicable – you can find them here:

And now – for our holiday gift list!

  1. Home automation is still really cool, and a lot less complicated (and less expensive!). Let’s start with all the cool things Amazon has been doing. It started off with the Echo, and Amazon now has an entire line of fun products. I’ve already worked with the original Echo, the new updated Echo, the old and new Echo Dot, and I’ve really enjoyed the new Echo Show. Beyond that, there are updated Fire Tablets, the Amazon Cloud Cam, and the new Echo Spot (think of that as a mini Echo Show). I’ve honestly enjoyed all of these as great devices for introducing greater levels of home automation. When it comes to gift ideas, you don’t have to splurge. The new Echo Dot is on sale for a limited time (40 percent off), and you can even bundle in a few other smart home gadgets like a Bose speaker, the Logitech Harmony Hub, and a Sonos Play:5 system. One of the coolest bundles, and one I really enjoy, pairs the Echo Dot with the Philips Hue. At less than $80, you get a smart home Echo Dot paired with a very fun Philips Hue lighting system. From there, just ask Alexa to change the lights for you.

If you’re looking to go a bit further with home automation, I absolutely recommend browsing the Wink website for lots of great ideas. Wink can act as a centralized hub that aggregates almost all the smart devices in your home, from your smart lock and Nest thermostat to your Alexa-enabled devices.

  2. Home security is getting a lot smarter. It feels like everyone is getting into the smart home security business. And it’s a great business to be in. I’m a big fan of Ring and its security products. I was one of the early adopters of its doorbell and now have the new Video Doorbell Pro. Ring was one of the very first to get into the video doorbell market and continues to make waves. It has evolved past just doorbells and now offers Spotlight cameras, Floodlight cameras, and even its own security system. The full protection kit starts at $199. From there, there are lots of fun sensors you can add for flooding, freezing, smoke/CO, and more.

Ring isn’t the only one in this game. Nest continues to lead with smart home innovation. I’ve been a huge fan of its cameras, thermostats, and smoke/CO systems. I’ve even pre-ordered a few of the new Nest Cam IQ outdoor cameras. As of right now, Nest has a lot of new offerings to explore if you’re looking for an awesome gift. This includes a new line of thermostats. The new Nest Thermostat E is designed to be a less expensive, yet still powerful, version of the original Nest Learning Thermostat. Similarly, you’ll see new editions of its cameras as well. The new Nest Secure alarm system comes with lots of sensors, is designed to be super easy to use, and has some unique gadgets like a Nest Tag that lets you arm and disarm your system quickly. Plus, Nest has come a long way with things like facial recognition, awareness, and alerting. Otherwise, solutions from folks like Alarm.com and SimpliSafe also make great gifts by making home security even easier.

  3. Something for the baby: So many people in my life are having kids, and lots of my millennial friends are looking to leverage smart baby tools to make parenting just a tiny bit easier. To help out, there are lots of fun baby-related techie gift ideas. First, check out this Smart Infrared Ear Thermometer with Touchable LED Display. Priced at just under $60, it’s non-invasive, accurate, and super fast.

Another really cool gift idea is the technology from Owlet. The Owlet Smart Sock silently tracks your infant’s heart rate and oxygen levels while they sleep and notifies parents if something is wrong. It’s not the least expensive gift out there, but for connected parents it’s a pretty cool one.

Finally, we come to the baby monitor. Invented in 1937, and reinvented (according to Nanit) in 2017. We’ve all heard stories of traditional baby monitors being hacked, where intruders not only see the kid but can actually talk to the child as well. Nanit takes a completely different approach to baby monitoring. According to Nanit, the device was designed to live next to your newborn: from the stable camera stand and wall mount to the infrared light, every hardware detail has been crafted with safety in mind, and Nanit’s data is secured with end-to-end encryption. With safe cable management and a shatter-resistant lens included, this is definitely the next generation in baby monitors.

  4. Something for the pets: Where would we be without our beloved pets? If you have a puppy or a kitty in your life that deserves an awesome gift, you have to check these out. First is the Pawbo Life Wi-Fi Pet Camera. This fun little gadget comes with 720p HD video, 2-way audio, video recording, a treat dispenser, and a laser game. If you or your friends have a cat at home, the laser game that you can control manually from your phone is so much fun.

Next is the Furbo Dog Camera. This is a treat-tossing, full HD Wi-Fi pet camera with 2-way audio. Designed for dogs, Furbo uses dog recognition technology to send you dog activity alerts, person alerts, and even a dog selfie alert! Oh, and guess what – it works with Amazon Alexa.

This next gift idea isn’t entirely cheap. But, when it comes to spoiling a loved pet – this is definitely a posh gift! The PetChatz HD & PawCall Bundle is a two-way audio/video pet treat camera with DogTV. It also includes brain games, recording capabilities, aromatherapy scents, motion & sound detection, and even a call mode. One of the really cool features here is the included PawCall function. This is basically an interactive brain game with your pup where your dog can actually contact you. No, I’m not kidding.

If you’re looking for a more reasonable stocking stuffer for your pet, check out the Pooch Selfie: The Original Dog Selfie Stick. This is the first smartphone accessory that actually ups your pup’s selfie game. Priced at a reasonable $12, you basically strap a squeaker ball to the top of your smartphone for maximum attention grabbing. From there, you get the absolute best selfies you will ever take with your pup.

  5. Great gifts under $50: You don’t need to spend too much to get something cool for your friends. There are lots of fun gift ideas under $50. For those friends who love to cook, get them a fun Wireless Meat Thermometer. I use mine all the time for baking, grilling, and overall cooking. It lets you keep the oven door closed while still getting temperature updates. The best part is that there’s an app, so you can keep an eye on your cooking from anywhere in the house.

Another fun idea is a VR system. Yes, you can get the new Oculus Rift+Touch Virtual Reality System for $400, but there are others that are still fun and a lot less expensive. This VR Headset with Bluetooth Remote Controller is priced at $38. This little guy is great for immersive large-screen viewing, watching 3D movies, and even playing 3D games on your smartphone.

Here’s another fun one to help you calm down over the holidays. The Ocean Wave Night Light creates a relaxing, soothing, and brilliant light show. At $25, this is an awesome gift for creating ambience and a calming atmosphere for your friends.

Finally, as I mentioned earlier, check out Amazon’s current deals. The new Fire HD 8 Tablet with Alexa is priced at $49 (30 percent off right now) and is a great little device. There’s even a kid-proof version of the Fire Tablet.

Whatever you decide to gift, the most important part is that you have fun, spend time with your friends and family, and really enjoy the holiday season. Remember, if you can’t think of a gift, donating to a charity in someone’s name is always an amazing one.

Source: TheWHIR

How the World of Connected Things is Changing Cloud Services Delivery

I recently led a session at the latest Software-Defined Enterprise Conference and Expo (SDxE) where we discussed how connected “things” are going to reshape our business, the industry, and our lives. When I asked the people in that full room how many had more than two devices that could connect into the cloud, pretty much every hand went up.

We’re living in a truly interconnected world. One that continues to generate more data, find more ways to give analog systems a digital heartbeat, and one that shapes lives using new technologies.

A recent Cisco Visual Networking Index report indicated that smartphone traffic will exceed PC traffic by 2021. In 2016, PCs accounted for 46 percent of total IP traffic, but by 2021 PCs will account for only 25 percent of traffic. Smartphones will account for 33 percent of total IP traffic in 2021, up from 13 percent in 2016. PC-originated traffic will grow at a CAGR of 10 percent, while TVs, tablets, smartphones, and Machine-to-Machine (M2M) modules will have traffic growth rates of 21 percent, 29 percent, 49 percent, and 49 percent, respectively.

Cloud services are accelerated in part by the unprecedented amounts of data being generated by not only people, but also machines and things. And, not just generated, but stored as well. The latest Cisco GCI report estimates that 600 ZB will be generated by all people, machines, and things by 2020, up from 145 ZB generated in 2015. And, by 2020, data center storage installed capacity will grow to 1.8 ZB, up from 382 EB in 2015, nearly a 5-fold growth.

When it comes to IoT, there’s no slowdown in adoption. Cisco’s report indicates that within the enterprise segment, database/analytics and IoT will be the fastest growing applications, with 22 percent CAGR from 2015 to 2020, or 2.7-fold growth. Growth in machine-to-machine connections and applications is also driving new data analytics needs. When it comes to connected things, we have to remember that IoT applications have very different characteristics. In some cases, application analytics and management can occur at the edge device level whereas for others it is more appropriately handled centrally, typically hosted in the cloud.
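
A quick sanity check on how those growth figures relate: a compound annual growth rate (CAGR) compounds over the forecast window, so the fold-growth multipliers quoted above follow directly from the percentages. The short snippet below reproduces the “2.7-fold” and “nearly 5-fold” figures and shows what a 49 percent smartphone/M2M traffic CAGR implies over five years; it is just arithmetic on the numbers already cited, not new data.

```python
def cagr_to_multiplier(cagr: float, years: int) -> float:
    """Total growth multiplier implied by a compound annual growth rate."""
    return (1 + cagr) ** years

def multiplier_to_cagr(multiplier: float, years: int) -> float:
    """Annual growth rate implied by total growth over a number of years."""
    return multiplier ** (1 / years) - 1

print(f"22% CAGR, 2015-2020: {cagr_to_multiplier(0.22, 5):.2f}x total growth")   # ~2.70x, the "2.7-fold" figure
print(f"49% CAGR, 2016-2021: {cagr_to_multiplier(0.49, 5):.2f}x total growth")   # ~7.3x for smartphone/M2M traffic
print(f"382 EB to 1.8 ZB: {multiplier_to_cagr(1800 / 382, 5):.1%} per year")     # ~4.7x total, roughly 36% CAGR
```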

Cloud will evolve to support the influx of connected things

It’s not just about how many new things are connected to the cloud. We must also remember the data that these devices and services are creating. Cloud services are already adapting to support a much more distributed organization, with better capabilities to support users, their devices, and their most critical services. Consider the following.

  • The edge is growing and helping IoT initiatives. A recent MarketsAndMarkets report indicated that CDN vendors help organizations deliver content to their end users efficiently with better QoE and QoS, and that CDNs also let organizations store content near their target users and secure it against attacks like DDoS. The report indicated the sheer size of the potential market: this segment is expected to grow from $4.95 billion in 2015 to $15.73 billion in 2020. With more data being created, organizations are working hard to find ways to deliver this information to their consumers and users. This shift in data consumption has changed the way we utilize data center technologies and deliver critical data.
  • Cloud is elastic – but we have to understand our use-cases and adopt them properly. There are so many powerful use-cases and technologies we can already leverage within the cloud. WANOP, SD-WAN, CDNs, hybrid cloud, and other solutions are allowing us to connect faster and use our devices more efficiently. When working with end-users, make sure you know which types of services you require. For example, in some cases you might need a hyperscale data center platform rather than a public cloud. In other cases, you still need granular control over the proximity of data to the remote application. This is where you need to decide between public cloud options and those of hyperscale providers. There’s no right or wrong here – just the right use-case for the right service.
  • Security will continue to remain a top concern. Recent research from Juniper suggests that the rapid digitization of consumers’ lives and enterprise records will increase the cost of data breaches to $2.1 trillion globally by 2019, increasing to almost four times the estimated cost of breaches in 2015. That’s trillion with a ‘t’. The report goes on to say that the average cost of a single data breach will exceed $150 million by 2020, as more business infrastructure gets connected. Remember, the data that we create isn’t benign. In fact, it’s very valuable to us, to businesses, and to the bad guys. Ensuring device and data security best practices will help you protect your brand and keep user confidence high. Depending on the device and your industry, make sure to really plan out your device and data security ecosystem. And it’ll always be important to ensure your security plan is agile and can adapt to a constantly evolving digital market.

Powerful cloud services are becoming a major part of our connected society. We’ve come to rely on file sharing, application access, connected physical devices, and much more to help us through our daily lives and in the business world. The goal of cloud services will be to enable these types of connections in a transparent manner.

Cloud services will continue to evolve to support data and device requirements. The goal of the organization (and the user) will be to ensure you’re using the right types of services. With that, keep an eye on the edge – it’ll continue to shape the way we leverage cloud, connected devices, and the data we create.

Source: TheWHIR

Real-World User Cloud Challenges and How to Overcome Them

Today’s modern organization is in the midst of a digital revolution. Cloud services are becoming much more mature, users are utilizing more digital tools to stay productive, and organizations are constantly tasked with keeping up with demand. Accenture’s research model and analysis shows that digital now dominates every sector of the economy. In fact, this global digital economy accounted for 22 percent of the world’s economy in 2015, and it’s growing rapidly: Accenture forecasts that share to increase to 25 percent by 2020, up from 15 percent in 2005.

Accenture goes on to say that those organizations which embrace these digital trends will come out on top in today’s very competitive market. Winners will create corporate cultures where technology empowers people to evolve, adapt, and drive change.

There’s a key operative word in that previous sentence – people. People are the consumers of these digital tools and the ones who use them to be productive and impact the modern business. So, with this in mind, what sort of issues are users experiencing when utilizing virtual workloads and cloud services?

Ultimately, it all comes down to the end-user and their digital experience. A given project can have the most advanced systems in place, but if the end-user experience is poor, the project may be considered a failure. With more devices, applications, and data to work with, the inherent challenge has become managing this end-user digital environment.

We’re no longer managing just the end-user; rather, we are attempting to control their entire experience. The biggest challenge facing IT now is the number of settings that a user carries with them at any given time. Compared with just a few years ago, users have significantly more settings and personalization to work with. When we say settings, we mean everything the user might be working with to stay productive (a simple sketch of such a settings profile follows the list):

  • Physical peripheral settings
  • Folder/content/data redirection
  • Data availability (file sharing)
  • Personalization settings (specific to devices, applications, desktops, and more)
  • Profile settings
  • Application settings (virtual, cloud-based, locally installed)
  • Hardware and device settings (personal vs corporate devices)
  • Desktop settings (virtual, physical, hosted)
  • Performance settings (optimization)
  • And much more
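
To make the idea of a portable “settings bundle” concrete, here is a minimal, hypothetical sketch of how such a profile could be modeled so the same experience can be applied to any device a user logs in from. The class and field names are illustrative assumptions, not any particular workspace product’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class UserEnvironmentProfile:
    """Hypothetical bundle of the settings a user carries across devices."""
    username: str
    folder_redirection: dict = field(default_factory=dict)  # e.g. {"Documents": r"\\fs01\home\jdoe"}
    app_settings: dict = field(default_factory=dict)        # per-application personalization
    device_policies: dict = field(default_factory=dict)     # personal vs. corporate device rules
    desktop_type: str = "virtual"                            # virtual, physical, or hosted
    performance_profile: str = "balanced"                    # optimization preset

def apply_profile(profile: UserEnvironmentProfile, device_id: str) -> None:
    """Apply the same profile regardless of which device the user logs in from."""
    print(f"Applying {profile.username}'s profile to {device_id}")
    # ...push folder redirection, app settings, and policies to the endpoint...

# The same profile delivered to two very different endpoints:
profile = UserEnvironmentProfile("jdoe", folder_redirection={"Documents": r"\\fs01\home\jdoe"})
apply_profile(profile, "corporate-laptop-042")
apply_profile(profile, "byod-tablet-07")
```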

Now, how can the IT department control all of this? How can management create a truly robust user experience? The challenge is not only to manage the above end-user requirements, but also to continuously provide a positive and productive end-user experience.

A great example would be a customer with numerous locations. By having visibility into the end-user, their policy and their settings, administrators can control what they see and how they access the environment. One of the biggest complaints is the constantly varying experience of users coming in from remote locations using various devices. By centralizing the user management process, admins can deliver the same experience regardless of the location, OS or hardware device.

Designing an IT Architecture Which Supports Digital Users

More applications, more data, more end-user devices, and more operations being performed on a computing platform all mean more help-desk calls. One of the biggest shifts, however, has been where these calls are coming from. Originally, many calls could be fielded and resolved at the data center level. Now, more users are experiencing issues related to their experience rather than the application or platform they’re operating on. For example, profile corruption or missing settings are extremely common calls. Similarly, users are requesting cloud services, which oftentimes results in a fragmented delivery architecture.

We now see users within healthcare, law, and other large verticals using more and more devices. Because of this, we began to see a new issue arise: User Fragmentation. This means broken profiles, improperly aligned settings, and missing application components because of the number of varying devices.

Why is this happening? As mentioned earlier, the availability of more devices on the network creates greater diversity in how data and information are handled. Unfortunately, without a good control platform, IT is stuck fielding more and more help-desk calls. Users are demanding that more of their devices share the same experience as the corporate desktop. Even in that scenario, the corporate desktop has to deal with even more information as data usage increases. The biggest challenge facing IT now is how to control the user layer while still facilitating the growth in the number of devices being utilized.

Overcoming the Digital Dilemma

Instead of just deploying a piece of technology – make sure to design your solutions around your users. Know the devices they’re deploying, know the resources they’re accessing, and always think about their experience. New tools and technologies help control the entire digital experience between physical devices and virtual resources. Most of all, these new systems help create a bridge between on-premises resources and those sitting in the cloud.

For example, new types of portals can present services, applications (on-premises, cloud, SaaS), virtual desktops, and much more from a single location. These portals act as service stores where users can request everything from printers to headsets. Furthermore, they can request specific cloud and business resources like applications, desktops, and much more. Ultimately, this removes barriers to adoption by reducing complexity. In designing a digital-ready end-user ecosystem, the rule of thumb is actually pretty straightforward: start your planning with your end-users. You’ll not only learn even more about your own business, but you’ll also learn how users stay productive, the devices they’re using, and how to proactively keep those users happy.

Source: TheWHIR

How to Create a Business Resiliency Strategy Using Data Center Partners and Cloud

Increased dependency on the data center means that outages and downtime are much costlier as well. According to a new study by Ponemon Institute and Emerson Network Power, the average cost of a data center outage has steadily increased from $505,502 in 2010 to $740,357 today (a 38 percent net change).

Across its research of 63 data center environments, the study found that:

  • Downtime costs for the most data center-dependent businesses are rising faster than average.
  • Maximum downtime costs increased 32 percent since 2013 and 81 percent since 2010.
  • Maximum downtime costs for 2016 are $2,409,991.

Now, with this in mind let’s start with two important points.

  1. What is Business Resiliency? Business resiliency and its associated services revolve around the protection of your data. A proactive resiliency program would include HA, security, backup, and anything else that impacts the confidentiality of data or compromises compliance. So, the idea becomes to unify the entire resiliency strategy to include all aspects of data protection.
  2. What is the business objective? Creating a proactive approach where a business can handle disruptions while still maintaining true resilience.

Today, organizations are leveraging data center providers and cloud services for their business resiliency and continuity planning. To create a good plan, there are several key steps that must be understood. Remember, business resiliency means protecting the entire business. Even if some specific units don’t need to be brought back immediately, there has to be a good plan around it all. In creating a resiliency strategy, start with these two concepts:

  • Use your BIA. The business impact analysis (BIA) is easily one of the most important steps in designing a resiliency plan and something that helps you better understand your business. These documents outline specific functions for each business unit, application, workload, user, and much more. Most of all, a BIA helps outline which applications and workloads are critical and how quickly they need to be brought back up. By having such a document, an organization can eliminate numerous variables in selecting a partner capable of meeting the company’s business resiliency needs. Furthermore, you can align specific resiliency services to your applications, users, and entire business units (a simple BIA sketch follows this list).
  • Understand Business Resiliency Components and Data Center Offerings. Prior to moving to a data center partner or cloud provider, establish your resiliency requirements, recovery time objectives, and plans for the future. Much of this can be accomplished with a BIA, but planning for the future will involve conversations with the IT team and other business stakeholders. Once those needs are established, it’s important to communicate them to the hosting provider. The idea here is to align the thought process to ensure a streamlined DR environment.
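
To make the BIA concrete, here is a minimal, hypothetical sketch of how BIA entries might be captured and mapped to recovery tiers. The application names, RTO/RPO values, and tier thresholds are illustrative assumptions only; a real BIA would come out of conversations with the business units.

```python
# Hypothetical BIA entries: application -> (RTO hours, RPO hours, business unit)
bia = {
    "order-processing": {"rto_hours": 1,  "rpo_hours": 0.25, "unit": "sales"},
    "email":            {"rto_hours": 4,  "rpo_hours": 1,    "unit": "corporate"},
    "archive-reports":  {"rto_hours": 48, "rpo_hours": 24,   "unit": "finance"},
}

def recovery_tier(rto_hours: float) -> str:
    """Map an RTO to an illustrative resiliency tier (thresholds are assumptions)."""
    if rto_hours <= 1:
        return "hot standby / multi-data center"
    if rto_hours <= 4:
        return "warm DR / cloud-enabled DRaaS"
    return "cold restore from backup"

for app, entry in sorted(bia.items(), key=lambda kv: kv[1]["rto_hours"]):
    print(f"{app:18s} RTO={entry['rto_hours']}h -> {recovery_tier(entry['rto_hours'])}")
```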

Incorporating Resiliency Management Tools and Services

One of the most important management aspects within any environment is the ability to have clear visibility into your data center ecosystem. This means using not only native tools, but ones provided by the data center partner. Working with management and monitoring tools to create business resiliency is very important. It’s also important to have a good view into the physical infrastructure of the data center environment.

  • Uptime, Status Reports, and Logs. Having an aggregate report will help administrators better understand how their environment is performing, and managers can make efficiency modifications based on the status reports provided by a data center’s reporting system. Working with a good log management system is also absolutely critical. This not only creates an effective paper trail, it also helps with infrastructure efficiency. A good log monitoring system is one of the first steps in designing a proactive, rather than reactive, environment. Many times logs will show an issue arising before it becomes a major problem, and administrators can act on those alerts and resolve problems calmly rather than reacting to an emergency (see the monitoring sketch after this list).
  • Mitigating Risk and Protecting Data. It’s important to work with a comprehensive suite of highly standardized yet customizable, tiered offerings that can support all levels of business requirements. Good data center partners can deliver a broad spectrum of resiliency solutions ranging from multi-data center designs to cost-effective cloud-enabled DRaaS. Furthermore, you can use professional services to assess, design, and implement disaster recovery environments, as well as managed services to help ensure business continuity in the event of a disruption. Partner offerings can be tailored to specific customer needs, and they remain flexible, agile, and scalable to continue to meet evolving requirements.
  • Meeting Regulations and Staying Compliant. Data center partners can provide structured DR and security methodologies, processes, procedures, and operating models on which to build your resiliency programs. Leading data center models are founded on industry best practices, methodologies, and frameworks including Lean Six Sigma, ITIL V3, ISO 27001, ISO 22301, and BS 25999. In fact, data center partner consultants can help organizations meet FISMA, HIPAA, FFIEC, FDIC, PCI, and SOX compliance requirements. Furthermore, a partner’s DR audit and testing solutions help organizations meet corporate and regulatory audit requirements by demonstrating the maturity of a business resiliency program.
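
As a rough sketch of the “proactive rather than reactive” log monitoring idea above, the snippet below scans recent log lines for warning patterns and raises an alert once a pattern starts repeating, before a minor fault becomes an outage. The patterns, threshold, and log format are assumptions for illustration, not any specific monitoring product’s behavior.

```python
import re
from collections import Counter

# Illustrative warning patterns; real deployments would tune these to their own logs.
WARNING_PATTERNS = {
    "disk":  re.compile(r"disk .* (degraded|predictive failure)", re.I),
    "power": re.compile(r"(ups on battery|power supply .* fault)", re.I),
    "temp":  re.compile(r"temperature .* (high|critical)", re.I),
}
ALERT_THRESHOLD = 3  # assumed: alert once a pattern repeats this many times

def scan_logs(lines):
    """Count warning-pattern hits and flag anything that crosses the threshold."""
    hits = Counter()
    for line in lines:
        for name, pattern in WARNING_PATTERNS.items():
            if pattern.search(line):
                hits[name] += 1
    return [name for name, count in hits.items() if count >= ALERT_THRESHOLD]

sample = [
    "2016-03-01 02:11 rack07 disk sda predictive failure reported",
    "2016-03-01 02:40 rack07 disk sda predictive failure reported",
    "2016-03-01 03:05 rack07 disk sda predictive failure reported",
]
for issue in scan_logs(sample):
    print(f"ALERT: recurring '{issue}' warnings - investigate before it becomes an outage")
```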

The process of selecting the right data center partner should include planning around contract creation and ensuring that the required management and business resiliency tools are in place. Remember, your data center is an important extension of your organization and must be properly managed and protected. Good data center providers will oftentimes offer tools for direct visibility into an infrastructure. This means engineers will have a clear understanding of how their racks are being utilized, powered, and monitored. These types of decisions make an infrastructure more efficient and much easier to manage in the long term.

Source: TheWHIR

Creating Cloud DR? Know What's in Your SLA

So many organizations are turning to the cloud for specific services, applications, and new kinds of business economics. We’re seeing more organizations deploying into the cloud and a lot more maturity around specific kinds of cloud services.

Consider this: according to Cisco, global cloud traffic crossed the zettabyte threshold in 2014, and by 2019, more than four-fifths of all data center traffic will be based in the cloud. Cloud traffic will represent 83 percent of total data center traffic by 2019. Significant promoters of cloud traffic growth include the rapid adoption of and migration to cloud architectures and the ability of cloud data centers to handle significantly higher traffic loads. Cloud data centers support increased virtualization, standardization, and automation. These factors lead to better performance as well as higher capacity and throughput.

One really great use-case is using cloud for disaster recovery (DR), backup, and resiliency purposes. And, with this topic in mind, one of the most important things to develop when deploying a DR environment with a third-party host is the SLA. This is where an organization can define very specific terms as far as hardware replacement, managed services, response time and more. Remote, cloud-based data centers, just like localized ones, need to be monitored and managed. When working with a third-party provider, host or colo, make sure specific boundaries are set and clearly understood as far as who is managing what.

Leverage provider flexibility. Hosting providers have the capability to be very flexible. They can set up a contract stating that they will only manage the hardware components of a rented rack. Everything from the hypervisor and beyond, in that case, becomes the responsibility of the customer. Even in these cases, it’s important to know if an outage has occurred or if there are failed components. Basically, the goal is to maintain constant communication with the remote environment. Administrators must know what is happening on the underlying hardware even if they are not directly responsible for it. Any impact on physical DR resources can have major repercussions on any workload running on top of that hardware.

Similarly, there are new cloud services that can take over the entire disaster recovery and business continuity (DRBC) function and even have failover sites ready as needed. Remember, critical workloads with higher uptime requirements will need special SLA provisions and cost considerations.

  • Define business recovery requirements. When developing an SLA for a cloud or hosting datacenter, it’s important to clearly define the recovery time objective – that is, how long will components be down? Some organizations require that they maintain 99.9 percent uptime with many of their critical components. In these situations, it’s very important to ensure proper redundancies are in place to allow for failed components. This can all be built into an SLA and monitored on the backend with good tools which have visibility into the DR environment. Let me give you a specific example. If you’re leveraging Microsoft’s Cool vs Hot storage tiers – there are some uptime considerations. Microsoft highlighted that you will be able to choose between Hot and Cool access tiers to store object data based on its access pattern. However, the Cool tier offers 99 percent availability, while the Hot tier offers 99.9 percent.
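
To put those availability tiers in perspective, here is a quick back-of-the-envelope calculation of how much downtime each percentage actually allows per year and per month. The figures follow directly from the percentages themselves and are not any provider’s commitment.

```python
HOURS_PER_YEAR = 365.25 * 24

def allowed_downtime(availability: float):
    """Return (hours/year, minutes/month) of downtime permitted at a given availability."""
    down_fraction = 1 - availability
    hours_year = down_fraction * HOURS_PER_YEAR
    minutes_month = hours_year * 60 / 12
    return hours_year, minutes_month

for sla in (0.99, 0.999, 0.9999):
    h, m = allowed_downtime(sla)
    print(f"{sla:.2%} availability -> ~{h:5.1f} hours/year, ~{m:6.1f} minutes/month")
# 99%    -> ~87.7 hours/year (~438 minutes/month)
# 99.9%  -> ~ 8.8 hours/year (~ 44 minutes/month)
# 99.99% -> ~ 0.9 hours/year (~  4 minutes/month)
```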

So, you absolutely need to design around your own DR and continuity requirements. If an organization has a recovery objective of 0-4 hours, it’s acceptable to have some downtime, but not for long. With this type of DR setup, an SLA will still be set up with responsibilities clearly segregated between the provider and the customer. Having an open level of communication and clear environmental visibility will save a lot of time and effort should an emergency situation occur.

  • Plan, train, and prepare for the future. In a DR moment, everyone needs to know what they are supposed to do in order to bring the environment back up quickly. This must be clearly defined in your runbook, especially if you’re leveraging DR and business continuity services from a host or cloud provider. Most of all, when creating SLAs, make sure you plan for bursts and for what your environment will require in the near future. Restructuring SLAs and hosting contracts can be pricey – especially for critical DR systems – so planning will be absolutely critical.
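
As a minimal illustration of the runbook point, the sketch below lists assumed recovery steps with owners and time estimates and checks that the plan fits inside a hypothetical 4-hour recovery objective (the 0-4 hour example used earlier). None of these steps or timings come from a real plan; they only show the kind of bookkeeping a runbook should make easy.

```python
# Hypothetical runbook: (step, owner, estimated minutes)
runbook = [
    ("Declare disaster and notify provider", "IT manager",   15),
    ("Fail over DNS to the DR site",         "network team", 30),
    ("Bring up replicated database",         "DBA",          45),
    ("Start application tier and verify",    "app team",     60),
    ("Notify business units and users",      "service desk", 15),
]
RTO_MINUTES = 4 * 60  # assumed 0-4 hour recovery objective

total = sum(minutes for _, _, minutes in runbook)
print(f"Planned recovery time: {total} minutes (RTO budget: {RTO_MINUTES} minutes)")
if total > RTO_MINUTES:
    print("WARNING: runbook exceeds the recovery time objective - revisit the SLA or the plan")
for step, owner, minutes in runbook:
    print(f"  {minutes:3d} min  {owner:12s}  {step}")
```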

Cloud computing and the various services it provides will continue to impact organizations of all sizes. Organizations are reducing their data center footprints while still leveraging powerful services that positively impact users and the business. Using cloud for DR and business continuity is a great idea when it’s designed properly. Today, cloud services are no longer just for major organizations; mid-market companies and SMBs are absolutely leveraging the power of the resilient cloud. Moving forward, cloud will continue to impact organizations as they transition into a more digital world. And having a good partnership (and SLA) with your cloud provider helps support a growing business and an evolving user base.

Source: TheWHIR

Planning for Cloud Backup: Working with the Right Provider

This post is the second part of a two-part series. Click here to read part one, Planning for Cloud Backup: Best Practices and Considerations.

The pace of cloud is pretty blistering. We’re seeing new services and offerings diversifying the competitive landscape and giving organizations and users many new options. In fact, Gartner recently pointed out that the worldwide public cloud services market is projected to grow 16.5 percent in 2016 to total $204 billion, up from $175 billion in 2015. That’s a lot of growth.

In our previous post we discussed a few considerations and best practices when it comes to cloud backup solutions. Now, we discuss working with the right kind of provider. Please remember, it’s not always about cost. Rather, your provider must align with your business and your strategy. This means providing services that are easy to use, compatible with your systems, and easy to manage. The last thing any organization wants is to experience outages due to poor cloud partner integration.

READ MORE: Understanding Cloud Backup and Disaster Recovery

Consider this: Ponemon Institute and Emerson Network Power have just released the results of the latest Cost of Data Center Outages study. Previously published in 2010 and 2013, the purpose of this third study is to continue to analyze the cost behavior of unplanned data center outages. According to the new study, the average cost of a data center outage has steadily increased from $505,502 in 2010 to $740,357 today (a 38 percent net change). With this in mind, working with a good cloud backup provider not only helps mitigate this outage risk and its associated costs; it also allows you to be a lot more flexible with your cloud-based data.

And so, when it comes to cloud backup and planning, it’s important to know and understand which product and vendor to work with. Remember, every environment is unique so the requirements of each organization will certainly differ. Still, there are some important considerations which must be made:

  • System compatibility. When working with a cloud backup solution, it’s important to take the time to verify that all systems being backed up are compatible. For example, if snapshots are being taken of a virtual environment, can those snapshots then be used to revert or recover the VM? Can they scale into other cloud systems or even an on-premises data center? Will the snapshot only take an image of the VM’s storage and nothing else? Using the same concept, we can check compatibility with other systems within an organization as well: databases, file servers, applications, and others. During the planning process, IT teams will need to work with the cloud backup vendor to ensure that their systems are compatible and can be backed up with the functionality desired. Remember, the goal isn’t only to back up the data. One of the biggest benefits of a modern cloud-based backup solution is the fast turnaround of data recovery. So, administrators must be sure that their data is not only backed up, but capable of quick and effective recovery (a restore-verification sketch follows this list).
  • Ease of use and training. When working with a cloud backup solution, it’s important to ensure a relative ease of use for the product. Administrators must be able to perform daily tasks to make sure that their data is being backed up safely and normally. There will be times when training is involved to further entrench the technologies within the organization. This is necessary for a smooth backup process since data backup and recovery are vital parts of an IT environment.
  • Management tools and feature considerations. Depending on the cloud platform chosen, there will be many feature considerations involved with the product. As mentioned earlier, data deduplication/compression, encryption, compliance support, VM compatibility, and archiving are just a few examples. Others may include direct virtual environment integration, or even cloud-ready disk-based backup solutions. When selecting the right technology and vendor, it’s important for the organization to establish the business case and need for a given product and its features. Once that is established, the other important task is to become familiar with the management tool set. Although the native tools offered within the product are powerful, there may be a need for third-party offerings as well. Since cloud is a distributed ecosystem, it’s important to consider multi-site deployments. When working with numerous sites and environments, management tools can go a long way in ensuring that data is being backed up normally and efficiently. That means proper data usage, minimal waste of resources, and direct visibility into the cloud backup environment.
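
To illustrate the point about verifying that backups can actually be recovered, here is a rough sketch of a periodic restore test. The BackupClient class and its methods are entirely hypothetical stand-ins for whatever API your backup vendor exposes; the pattern itself (restore the latest snapshot to an isolated target and confirm it responds) is the part that matters.

```python
import datetime

class BackupClient:
    """Hypothetical stand-in for a cloud backup vendor's SDK."""
    def latest_snapshot(self, vm_name): ...
    def restore_to_sandbox(self, snapshot): ...
    def health_check(self, restored_vm): ...

def verify_restore(client: BackupClient, vm_name: str) -> bool:
    """Restore the most recent snapshot into an isolated sandbox and confirm it works."""
    snapshot = client.latest_snapshot(vm_name)
    restored = client.restore_to_sandbox(snapshot)  # never touches production
    healthy = client.health_check(restored)
    print(f"{datetime.date.today()} restore test for {vm_name}: {'PASS' if healthy else 'FAIL'}")
    return bool(healthy)

# Run this on a schedule (e.g. weekly) against the workloads your BIA marks as critical.
```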

Whenever cloud and backup solutions are in the discussion, all infrastructure components that may be affected by the deployment must be considered. Your backup and cloud partner must understand this and be a part of the process. This means compatibility, understanding of the technology, and awareness of how a backup routine may affect other parts of the infrastructure. There will be times when a backup job requires higher amounts of bandwidth or a larger store size; these planning points must be addressed prior to any major rollout. Remember, depending on the environment, there may be options for a pilot or POC. A small-scale rollout of a cloud backup solution may reveal weaknesses that can be quickly resolved prior to a large deployment. A good cloud partner can always help there as well.

Source: TheWHIR

Embracing Cloud: How Cloud Services Impact All Verticals and Industries

Let’s talk cloud for a minute. First of all – the cloud is everywhere. And, beyond the general term – we’re seeing so much evolution around cloud services as well. Recently, Gartner pointed out that the worldwide public cloud services market is projected to grow 16.5 percent in 2016 to total $204 billion, up from $175 billion in 2015. The highest growth will come from cloud system infrastructure services (infrastructure as a service [IaaS]), which is projected to grow 38.4 percent in 2016. Cloud advertising, the largest segment of the global cloud services market, is expected to grow 13.6 percent in 2016 to reach $90.3 billion.

“The market for public cloud services is continuing to demonstrate high rates of growth across all markets and Gartner expects this to continue through 2017,” said Sid Nag, research director at Gartner. “This strong growth continues to reflect a shift away from legacy IT services to cloud-based services, due to the increased trend of organizations pursuing a digital business strategy.”

However, it’s one thing to sit on the sidelines and watch this cloud revolution unfold. It’s a whole different story when you jump on the train. This is why it’s so critical for companies to actually embrace cloud technologies and services.

SEE ALSO: Five Security Features That Your Next-Gen Cloud Must Have

At a recent conference in Toronto, Tiffani Bova, former Vice President & Sales Strategies analyst at Gartner, said the cloud market is transforming and it is up to companies to use this to their advantage. She went on to point out that by 2017, 75 percent of IT organizations will have a bimodal capability — and half will make a mess trying to maintain this balance. As companies adopt cloud computing, they will face an issue — how to maintain operations and “keep the lights on while trying to innovate.”

Here’s the big point Bova made: Digital business incompetence will cause a quarter of organizations to lose their market position by 2017. The former Gartner analyst believes every company in the world is in “some way” an IT company – but while firms work on expanding cloud products and services, they need to remember the focus is on business. However, Bova thinks many within the enterprise are destined to make a mess of this, and will lose their market positions in the next few years as a result.

With all of this in mind, it’s critical to see that the emergence of the cloud has helped many organizations expand beyond their physical data centers. New types of cloud-based technologies allow IT environments to truly consolidate and grow their infrastructure quickly and, more importantly, affordably.

Before the cloud, many companies looking to expand upon their current environment would have to buy new space, new hardware and deploy workloads based on a set infrastructure. Now that WAN connectivity has greatly improved, cloud-based offerings are much more attractive.

Consider these cloud computing points:

  • Creating next-generation data distribution and scale. Massive data centers can be distributed both locally and around cloud-based environments which administrators can access and manage at any time. These environments are scalable, agile and can meet the needs of a small or very large enterprise.
  • Cloud comes in many flavors – pick what your business needs and consume. Cloud technologies come in three major offerings: Private, Public and Hybrid. The beauty of the cloud is that an organization can deploy any one of these solutions depending directly on their business goals.
    • Private clouds are great solutions for organizations looking to keep their hardware locally managed. A good example here would be application virtualization technologies such as Citrix XenApp. Users have access to these applications both internally and externally from any device, anytime and anywhere. Still, these workloads are privately managed by the organization and delivered over the WAN down to the end-user.
    • Public clouds are perfect for organizations looking to expand their testing or development environments. Many companies simply don’t want to pay for equipment that will only be used temporarily. This is where the “pay-as-you-go” model really works out well. IT administrators are able to provision cloud-ready resources as they need them to deploy test servers or even create a DR site directly in the cloud (see the provisioning sketch after this list).
    • In a hybrid cloud, a company can still leverage third party cloud providers in either a full or partial manner. This increases the flexibility of computing. The hybrid cloud environment is also capable of providing on-demand, externally-provisioned scalability. Augmenting a traditional private cloud with the resources of a public cloud can be used to manage any unexpected surges in workload.
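
As a concrete illustration of the “pay-as-you-go” idea above, here is a minimal sketch of spinning up a temporary test server programmatically, assuming an AWS account with credentials already configured and the boto3 SDK installed. The AMI ID, region, and instance type are placeholders, and other public cloud providers expose equivalent APIs.

```python
import boto3

# Assumptions: AWS credentials are configured locally; the AMI ID below is a placeholder.
ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image
    InstanceType="t2.micro",          # small, inexpensive test instance
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "temporary-test-server"}],
    }],
)
print("Launched test instance:", instances[0].id)

# When the test is done, stop paying for it:
# instances[0].terminate()
```

The design point is that the instance exists only for the life of the test and is terminated afterward, so you pay only for what you actually use.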

Remember, even though “cloud” isn’t anything new – the many cloud services being deployed today might be. Cold storage, big data, BI and data analytics are all different kinds of services which can now be cloud-born.

READ MORE: Why Moving to Cloud Makes Sense for Mid-Market and SMB Organizations

With that in mind, there are a couple of key considerations to take in while working with your own cloud environment:

  1. It will always be very important to monitor resources both at the local and cloud-based data center. Remember, resources are finite so managing how much storage is allocated to a VM, or how much RAM is given to a host will always be important.
  2. Also, make sure to monitor WAN links between sites. Simple connectivity issues can have a serious impact on workload delivery. Plan for usage spikes and always try to have a DR component built into a heavily used production environment.
  3. Cloud is a powerful tool capable of creating great ROI, as long as your IT strategy always aligns with your business. Remember, the most successful cloud deployments are those that break down business barriers and silos and integrate the entire organization.

Embracing the cloud

As you take in this article ask yourself a key question: “How am I using cloud today and is it effective for me?” If you haven’t looked at some kind of cloud service this far into the game – I highly suggest that you do. Whether it’s cold storage or some kind of cloud backup service – there are amazing ways you can further enable your business.

Remember, cloud allows you to innovate at the pace of software and enables the business to stay ever-agile. “We have only touched the surface of what the ecosystem will deliver in the future,” Gartner analyst Tiffani Bova says. “It is up to business leaders to understand the impact market forces will have on sales strategies in the future ecosystem.”

Don’t fall behind the curve. The best piece of advice a cloud champion can give you is to at least test or try out a cloud service. There are great ways to demo these environments with little-to-no impact on your production environment. From there, you can seamlessly integrate core data center components with a variety of cloud services. As the digital revolution continues to unfold, cloud can be your vehicle for navigating it.

Source: TheWHIR

Benefits and Trends Around Cloud Adoption and Distributed Infrastructures

It’s no secret that many organizations are leveraging the power of the cloud to help them create more agility and a better business structure. In fact, just earlier this year, Netflix completed its entire move into the cloud. Having shut down its last in-house data center, Netflix finalized the move by transitioning its final back-office services to the Amazon Web Services (AWS) public cloud.

And, these trends aren’t slowing down. According to the new Worldwide Semiannual Public Cloud Services Spending Guide from International Data Corporation (IDC), worldwide spending on public cloud services will grow at a 19.4 percent compound annual growth rate (CAGR) — almost six times the rate of overall IT spending growth – from nearly $70 billion in 2015 to more than $141 billion in 2019. The new spending guide expands on IDC’s previous public cloud services forecasts by offering greater detail on industry and geographic spending levels.

“Over the past several years, the software industry has been shifting to a cloud-first (SaaS) development and deployment model. By 2018, most software vendors will have fully shifted to a SaaS/PaaS code base,” said Frank Gens, Senior Vice President & Chief Analyst at IDC. “This means that many enterprise software customers, as they reach their next major software upgrade decisions, will be offered SaaS as the preferred option. Put together, new solutions born on the cloud and traditional solutions migrating to the cloud will steadily pull more customers and their data to the cloud.”

SEE ALSO: Moving Away from AWS Cloud: Dropbox Isn’t an Anomaly, and Here’s Why

According to IDC’s press release, from a company size perspective, large and very large companies will be the primary driver of worldwide public cloud services with spending of more than $80 billion in 2019. However, small and medium businesses (SMBs) will remain a significant contributor to overall spending with more than 40 percent of the worldwide total throughout the forecast period coming from companies with fewer than 500 employees.

With all of this in mind – let’s start here: Distributed data centers and cloud service providers are not altogether new technologies – however, their widespread adoption over the past few years has certainly been noticed. As opposed to a single, centralized data center managing all known workloads, administrators are now utilizing WAN and cloud technologies to evolve beyond single points of operation.

As technology progresses, so does the data center. Server, switching, and storage resources have become less expensive and more attainable for more organizations. This has led to a distributed environment revolution. Organizations are taking advantage of this type of environment by decentralizing their infrastructure to make their data more agile and redundant.

Cloud-based data centers are an example of a distributed environment where a single organization can have multiple points of live data. Furthermore, one of the most dominant forms of cloud is still the hybrid architecture. Findings from a recent Gartner report say that 2016 will be the defining year for cloud as private cloud begins to give way to hybrid cloud, and nearly half of large enterprises will have hybrid cloud deployments by the end of 2017.

So – what are the benefits behind adopting this type of environment? Well, there are actually a few:

  • IT administrators are able to create hot DR sites and replicate their vital data over a dedicated WAN link.
  • Extra resources are just a few mouse-clicks away with the marriage of virtualization and cloud technologies. Engineers can spin up new VMs as needed to handle additional user loads.
  • Large, bulky environments can be consolidated into smaller, more efficient datacenters where a single point of failure becomes a problem of the past.

Still, with distributed environments, there are challenges that must be addressed. Since every environment is truly unique, each infrastructure may have its own set of design questions to answer.

  • Remember, when looking into a distributed environment, WAN link considerations must be made. In these situations, evaluate the type of load that’s going to be pushed down the pipe. Prior to deploying any production systems, relevant workload stress testing must occur to fully grasp the bandwidth requirements (a quick bandwidth estimate follows this list).
  • Security will also be a challenge. In a distributed environment, access to the data and workloads must be carefully managed and monitored. Proper security best practices should always be applied to a given workload.
  • Finally, resource management can sometimes be a difficult task as well. With distributed environments, an organization may have several data center locations, each with its own set of resources. Having a solid monitoring system will ensure that these resources are properly used and allocated as needed.
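
To make the WAN-sizing point concrete, here is a quick back-of-the-envelope estimate of the sustained bandwidth needed to replicate a nightly change set between sites. The 500 GB change set, 8-hour window, and 25 percent overhead are illustrative assumptions; real workload stress testing should still follow.

```python
def required_mbps(gigabytes: float, window_hours: float, overhead: float = 0.25) -> float:
    """Sustained Mbps needed to move `gigabytes` within `window_hours`,
    padded by an assumed protocol/retransmission overhead."""
    megabits = gigabytes * 8 * 1000  # GB -> megabits (decimal units)
    return megabits * (1 + overhead) / (window_hours * 3600)

# Assumed example: 500 GB of nightly changes, 8-hour replication window
print(f"~{required_mbps(500, 8):.0f} Mbps sustained")  # roughly 174 Mbps with 25% overhead
```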

Maybe you’re not the size of a Netflix – yet. But that doesn’t mean that you can’t begin to leverage the power of the cloud. There are so many great services to explore that almost every business across any vertical can find some type of benefit. When working with cloud, always take the time to plan out your deployments. Creating the right type of contracts and proper SLAs will help ensure that you maintain costs and keep the cloud environment aligned with the business.

Source: TheWHIR

The Evolution of White Box Gear, Open Compute and the Service Provider

There is a lot changing within the modern cloud and service provider world. Organizations are seeing the direct benefits of moving toward the cloud and are now positioning their spending cycles and budgets to move their environments into the cloud. Trends around application delivery, data control, resource utilization, and even end-user performance are all driving more organizations to cloud and service providers.

Consider this: according to Gartner, the worldwide public cloud services market is projected to grow 16.5 percent in 2016 to total $204 billion, up from $175 billion in 2015. The highest growth will come from cloud system infrastructure services (infrastructure as a service [IaaS]), which is projected to grow 38.4 percent in 2016.

“The market for public cloud services is continuing to demonstrate high rates of growth across all markets and Gartner expects this to continue through 2017,” said Sid Nag, research director at Gartner. “This strong growth continues to reflect a shift away from legacy IT services to cloud-based services, due to the increased trend of organizations pursuing a digital business strategy.”

The biggest reason for this growth is the clear flexibility you get from working with a cloud and service provider. Why is this the case? Because cloud computing is a style of computing in which scalable and elastic IT-enabled capabilities are delivered “as a service” using Internet technologies.

This is where the modern service provider and the Open Compute Project (OCP) come in.

With all of these new demands around new kinds of services and delivery methodologies, service providers simply needed a new way to deliver and control resources. This means building an architecture capable of rapid scalability that follows efficient economies of scale for the business. To accomplish this, there needed to be a revolutionary new way to think about the service provider data center and the architecture that defines it. This kind of architecture would be built around open standards and open infrastructure designs.

With that in mind, we introduce three very important topics.

  1. Understanding the Open Compute Project (OCP)
    • Founded in 2011, the Open Compute Project has been gaining attention from more and more organizations. So, who should be considering the Open Compute platform, and for what applications? The promise of lower cost and open standards for IT servers and other hardware seems like a worthwhile endeavor, one that should benefit all users of IT hardware while also improving the energy efficiency of the entire data center ecosystem. The open source concept has proven itself successful for software, as witnessed by the widespread adoption and acceptance of Linux, despite early rejection from enterprise organizations.

The goal of Open Compute?

  • To develop and share the design for “vanity free” IT hardware which is energy efficient and less expensive.
  • OCP servers and other OCP hardware (such as storage and networking) in development are primarily designed for a single lowest common denominator — lowest cost and basic generic functions to serve a specific purpose. One OCP design philosophy is a “vanity free,” no-frills design, which starts without an OEM-branded faceplate. In fact, the original OCP server had no faceplate at all. It used only the minimal components necessary for a dedicated function — such as a massive web server farm (the server had no video chips or connectors).
  2. Cloud Providers Are Now Using Servers based on OCP design
    • Open compute servers are already generating a lot of interest and industry buzz. Imagine being able to architect completely optimized server technologies which deploy faster, are less expensive, and have just the right features that you need for scale and efficiency.
    • This is where the new whitebox and OCP family of servers comes in. With an absolute focus on the key challenges and requirements of the industry’s fastest-growing segment – the Service Provider – these types of servers take the OCP conversation to a new level. The customization level of these servers gives you the ability to design and deliver everything from stock offerings to custom systems, and even component-level designs. You also get system integration and data center support. The ultimate idea is to create economies of scale to drive TCO lower and ROI higher for those where “IT is the business.”
  3. Clear demand for OCP and “vanity-free” server architecture
    • According to IDC, service providers will continue to break new ground in search of both performance gains and cost reductions as they expand their cloud architecture implementations. Additionally, the hosting-as-a-service model will continue to transition away from traditional models toward cloud-based delivery mechanisms like infrastructure as a service, spurring hyperscale growth in servers used for hosting (15% to 20% CAGR from 2013 to 2018 – a quick compounding check of what that means appears after this list).
    • At Data Center Knowledge, we conducted a survey, sponsored by HP, to find out what types of workloads are being deployed, what service providers value, and where the latest server technology can make a direct impact. The results, from about 200 respondents, showed us what the modern data center and service provider really need from a server architecture. They also showed clear demand for servers capable of more performance while carrying fewer “bells and whistles.”
      • 51% of respondents said that they would rather have a server farm with the critical hardware components and fewer software add-ons.
      • When asked, “How much do server (hardware and software) add-on features impact your purchasing decision? (Easy-to-access drive holders, memory optimizations, easy upgradability, software management, etc.),” 73% of the survey respondents indicated that this was either important or very important to them.
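To put the IDC growth projection in perspective, here is the quick compounding check mentioned above. The growth rates and the five-year window come from the IDC figure; the starting unit count is irrelevant, since only the growth multiple matters.

```python
# Compound the 15-20% CAGR that IDC projects for hosting servers over 2013-2018 (5 years).
def growth_multiple(cagr, years):
    """How many times larger the installed base is after `years` of compound growth."""
    return (1 + cagr) ** years

for cagr in (0.15, 0.20):
    print(f"{cagr:.0%} CAGR over 5 years -> {growth_multiple(cagr, 5):.2f}x the 2013 base")

# 15% CAGR roughly doubles the 2013 base (~2.01x); 20% CAGR comes out near 2.5x (~2.49x).
```

In other words, even the low end of that projection implies the hosting-server footprint doubles in five years – exactly the kind of volume where vanity-free, cost-optimized hardware pays off.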

Here’s the reality – there is big industry adoption around OCP as well. Facebook is one of those organizations. According to Facebook, a small team of its engineers spent two years tackling a big challenge: how to scale its computing infrastructure in the most efficient and economical way possible.

The team first designed the data center in Palo Alto, before deploying it in Prineville, Oregon. The project resulted in Facebook building their own custom-designed servers, power supplies, server racks, and battery backup systems.

What did this mean for Facebook and their new data center?

  • Use of a 480-volt electrical distribution system to reduce energy loss.
  • Removal of anything in the servers that didn’t contribute to efficiency.
  • Reuse of hot-aisle air in winter to heat both the offices and the outside air flowing into the data center.
  • Elimination of the need for a central uninterruptible power supply.

Ultimately, this design produced an environment capable of consuming 38 percent less energy to do the same work as Facebook’s existing facilities, while costing 24 percent less.
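As a quick sanity check on what those percentages mean in practice, the sketch below applies the reported 38 percent energy and 24 percent cost reductions to a hypothetical baseline facility. The baseline numbers are made up purely for illustration; only the reduction percentages come from Facebook’s figures above.

```python
# Apply Facebook's reported reductions (38% energy, 24% cost) to a made-up baseline facility.
baseline_energy_mwh = 10_000    # hypothetical annual energy use of a comparable legacy facility
baseline_cost_usd = 5_000_000   # hypothetical annual operating cost

ocp_energy_mwh = baseline_energy_mwh * (1 - 0.38)
ocp_cost_usd = baseline_cost_usd * (1 - 0.24)

print(f"Energy: {baseline_energy_mwh:,} MWh -> {ocp_energy_mwh:,.0f} MWh per year")
print(f"Cost:   ${baseline_cost_usd:,} -> ${ocp_cost_usd:,.0f} per year")
```

At real-world scale, reductions of that size compound quickly, which is a big part of why this kind of design has generated so much interest among service providers.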

This is where, as a cloud builder, service provider, or modern large enterprise, you can really feel the impact. The concept of servers without all the add-ons, built around OCP design standards, has sparked interest in the market because this type of server architecture allows administrators to scale out with only the resources that they need. This is why we are seeing vanity-free server solutions emerge as the service provider business model evolves.

Source: TheWHIR

Five Security Features That Your Next-Gen Cloud Must Have

With cloud computing, virtualization, and a new type of end user, the security landscape around the modern infrastructure needed to evolve. IT consumerization and a lot more data within the organization have forced security professionals to adopt better ways to protect their environments. The reality is that standard firewalls and UTMs are just no longer enough. New technologies have emerged which can greatly enhance the security of a cloud and virtualization environment – without impacting performance. This is where the concept of next-generation security came from.

It was the need to abstract physical security services and create logical components for a powerful infrastructure offering.

With that in mind – let’s look at five great next-gen security features that you should consider.

  1. Virtual security services. What if you need application-level security? What about controlling and protecting inbound, outbound, and intra-VM traffic? New virtual services can give you entire virtual firewalls, optimized anti-virus/anti-malware tools, and even proactive intrusion detection services. Effectively, these services allow for the multi-tenant protection and support of network virtualization and cloud environments.
  2. Going agentless. Clientless security now directly integrates with the underlying hypervisor. This gives your virtual platform the capability to do fast, incremental scans as well as the power to orchestrate scans and set thresholds across VMs. Here’s the reality – you can do all of this without performance degradation. Now, we’re looking at direct virtual infrastructure optimization while still maintaining optimal cloud resource efficiency. For example, if you’re running on a VMware ecosystem, there are some powerful “agentless” technologies you can leverage. Trend Micro’s Deep Security agentless anti-malware scanning, intrusion prevention, and file integrity monitoring capabilities help VMware environments benefit from better resource utilization when it comes to securing VMs. Further, Deep Security has been optimized to support the protection of multitenant environments and cloud-based workloads, such as Amazon Web Services and Microsoft Azure.
  3. Integrating network traffic with security components. Not only can you isolate VMs, create multi-tenant protection across your virtual and cloud infrastructure, and allow for application-specific protection – you can now control intra-VM traffic at the networking layer. This type of integration allows the security layer to be “always-on.” That means security continues to be active even during activities like a live VM migration.
  4. Centralized cloud and virtual infrastructure management/visibility. Whether you have a distributed cloud or a virtualization environment – management and direct visibility are critical to the health of your security platform. One of the best things about next-generation security is the unified visibility that the management layer is capable of creating. Look for the ability to aggregate, analyze, and audit your logs and your entire security infrastructure. Powerful spanning policies allow your virtual infrastructure to be much more proactive when it comes to security. By integrating virtual services (mentioned above) into the management layer – administrators are able to be proactive, stay compliant, and continuously monitor the security of their infrastructure.
  5. Consider next-gen endpoint security for your cloud users. There are some truly disruptive technologies out there today. Here’s an example: Cylance. This security firm replaces more traditional, signature-based technologies with a truly disruptive architecture. Basically, Cylance uses a machine-learning algorithm to inspect millions of file attributes to determine the probability that a particular file is malicious (a simplified sketch of this attribute-scoring idea appears after this list). The algorithmic approach significantly reduces the endpoint and network resource requirement. Because of its signature-less approach, it is capable of detecting both new threats and new variants of known threats that typically are missed by signature-based techniques. Here’s the other really cool part – even when your users disconnect from the cloud, they’re still very well protected. Because the Cylance endpoint agent does not require a database of signatures or daily updates, and is extremely lightweight on network, compute, and data center resources – it can remain effective even when disconnected for long periods.
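To make the attribute-scoring idea from item 5 a little more tangible, here is a minimal, hypothetical sketch of a classifier trained on a handful of made-up file attributes. This is not Cylance’s model (which is proprietary and operates on millions of attributes); the feature names, training data, and classifier choice are assumptions purely for illustration.

```python
# Illustrative only: a toy attribute-based detector, NOT Cylance's actual algorithm.
# The idea: score a file's attributes against a trained model to estimate the
# probability it is malicious -- no signature database or daily updates required.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical attributes per file: [size_kb, imported_api_count, byte_entropy, is_signed]
X_train = np.array([
    [120,  15, 4.2, 1],   # benign samples
    [340,  22, 4.8, 1],
    [ 80,  10, 3.9, 1],
    [910, 140, 7.6, 0],   # malicious samples
    [450,  95, 7.1, 0],
    [780, 120, 7.8, 0],
])
y_train = np.array([0, 0, 0, 1, 1, 1])  # 0 = benign, 1 = malicious

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Score a new file seen on the endpoint -- this works offline, since only the
# trained model (not a signature feed) is needed at decision time.
new_file = np.array([[600, 110, 7.4, 0]])
p_malicious = model.predict_proba(new_file)[0, 1]
print(f"Estimated probability of being malicious: {p_malicious:.2f}")
```

The practical takeaway is in the last comment: because the decision is a local model evaluation rather than a signature lookup, protection does not degrade when the endpoint is disconnected for long periods.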

Your environment is going to become more distributed. Virtual environments allow for greater scale, where administrators are able to replicate data, better support distributed users, and deliver more complex workloads. Throughout all of this – you will need to ensure that your data points are secure. Our dependence on the IT framework will only increase the number of workloads we place into the modern data center and virtual platform. Because of this – it’s critical to deploy powerful security features while still maintaining optimal performance.

Next-generation security technologies do just that. We now have powerful, scalable ways to deploy security solutions into the modern cloud and virtualization environment. As you build out your virtual and cloud platforms, make sure to look at security solutions that utilize next-generation features.

Ultimately, you’ll create a more efficient platform, improve end-user experiences, and be able to control your security environment on a truly distributed scale.

Source: TheWHIR