Pentagon Bug Bounty Program Attracts Strong Hacker Interest

The Pentagon is at the midpoint of a crowdsourcing initiative that has attracted about 500 researchers, who have signed up for the chance to search for bugs in the agency’s Websites.

The Pentagon’s bug bounty program hit its midway point this past week, and already the initiative is, in some ways, a success. More than 500 security researchers and hackers have undergone background checks and begun to take part in the search for security flaws, according to HackerOne, the company managing the program.

The “Hack the Pentagon” pilot, announced in March, is the first federal government program to use a private-sector crowdsourcing service to facilitate the search for security flaws in government systems. The $150,000 program started two weeks ago and will continue for another two weeks. While neither the Pentagon nor HackerOne has disclosed any of the results so far, Alex Rice, chief technology officer and co-founder of vulnerability-program management service HackerOne, stressed that it would be “an extreme statistical outlier” if none of the researchers found a significant vulnerability.

“What I can say is that we haven’t seen any of [these programs] launched, even those with a smaller number of individuals, where the researchers have found nothing,” he told eWEEK. “No one who launches these bounty programs expects to find nothing.”

The Pentagon’s program is the first bug bounty effort sponsored by the federal government, but it will not likely be the last, because companies and government agencies are on the wrong side of an unequal security equation: While defenders have to hire enough security workers to find and close every security hole in their software and systems, attackers only have to find one, said Casey Ellis, CEO and founder of BugCrowd, a vulnerability-bounty organizer.

“The government is in a really bad position right now, which comes from being outnumbered by the adversaries,” he said. “They can’t hire security experts fast enough, and in the meantime they are still being hacked.” Crowdsourcing some aspects of their security work offsets part of the inequality in the math facing these companies, he said.

The Department of Defense program, however, is on a much larger scale than most initial commercial efforts, HackerOne’s Rice said. Other efforts typically use dozens of security researchers, rather than hundreds. The Pentagon should get good results because the sheer number of hackers means the program will have broader coverage of potential vulnerabilities.

“Even hiring the best security experts that you are able to find, that will still be a much smaller pool than if you could ask everyone in the world, or in the country,” Rice said. “You really can’t do security effectively unless you come at it from every possible angle.”

U.S. Secretary of Defense Ash Carter characterized the initiative as a way for the government to take new approaches to blunt the attacks targeted at the agency’s networks. “I am always challenging our people to think outside the five-sided box that is the Pentagon,” he said in a statement at the time. “Inviting responsible hackers to test our cyber-security certainly meets that test.”

The bug bounty pilot started on April 18 and will end by May 12, according to the Department of Defense. HackerOne is slated to pay out bounties to winners no later than June 10. The Department of Defense has earmarked $150,000 for the program. The DOD called the initiative a step toward implementing the administration’s Cybersecurity National Action Plan, a strategy document announced Feb. 9 that calls for the government to prioritize immediate actions to bolster the defenses of the nation’s networks.
The program is being run by the DOD’s Defense Digital Service, which Carter launched in November 2015. While finding and fixing vulnerabilities is important, the program could also create a pipeline for recruiting knowledgeable security workers into open positions in the federal government, Monzy Merza, director of cyber research at data-analysis firm Splunk, said in an email interview.

“Discovery and fixing of vulnerabilities is a good thing,” he said. “Creating an opportunity for individuals to test their skills and learn is also important. And there is a general shortage of skilled security professionals. Putting all these pieces together, a bug bounty program creates opportunities for people to learn and creates a human resource pool in a highly constrained market.”

While attacking government systems may thrill some hackers and make others too nervous to participate, the actual program differs little from the closed bug hunts sponsored by companies, HackerOne’s Rice said. The security firm’s programs—and other efforts by BugCrowd and TippingPoint’s Zero-Day Initiative, now part of security firm Trend Micro—vet security researchers and hackers to some extent before allowing them to conduct attacks on corporate services and Websites, especially production sites. In the Pentagon’s case, more extensive background checks were conducted.

In the end, the programs allow companies to spend money on security more efficiently, paying only for results rather than for hard-to-find workers, he said. “Companies are not insecure because of a lack of money to spend on security,” Rice said. “There is a ridiculous amount of money being inefficiently and ineffectively spent on security. Even if we could hire all the security experts in our town or in our field, we could not possibly level the playing field against the adversaries.”
Source: eWeek

Webtrends Infinity Analytics – A New Breed of Analytics for IoT

Our friends over at Webtrends just released the infographic below, which explains some of the differences between the analytics solutions of yesterday and today and details what’s coming with their new solution, Infinity Analytics. Uniting scale and flexibility with speed and accuracy, Infinity Analytics harnesses big data to deliver truly actionable customer intelligence.

[Infographic: Webtrends Infinity Analytics, April 2016]

Source: insideBigData

Enterprises Turn to SD-WANs to Improve Branch Office Connectivity

SD-WANs offer more flexibility, agility and affordability in branch office networks, and there’s a crowded field of vendors giving customers a lot of options.

Joe Tan knew he was going to have to improve his company’s WAN environments. Devcon Construction is a commercial building company based in Milpitas, part of Northern California’s Silicon Valley. But it has dozens of construction offices and remote sites, big and small, throughout Northern California.

Connectivity to the central office was important, but the company had to rely on whatever options were available at the individual sites. Some could get Multiprotocol Label Switching (MPLS), while others needed to use T1 connections, 4G wireless devices or even other technologies. The patchwork of disparate connections created an array of problems for Devcon, from high costs and security concerns to traffic bottlenecks, network management and visibility issues, not to mention high time demands on a small IT staff, according to Tan, Devcon’s director of IT.

Audio and video collaboration was difficult because transferring large files could result in high bandwidth consumption and slow performance. Meanwhile, having service providers set up MPLS connections could be expensive and time-consuming.

“We have a lot of construction sites in Northern California, and they all need to connect back to our headquarters,” Tan told eWEEK. “Reliable connectivity is really important for our business to run.”

Tan started investigating technology options for the company’s wide-area network (WAN) about two years ago. About 18 months ago, he started talking with VeloCloud Networks, one of a growing number of vendors in the rapidly emerging software-defined WAN (SD-WAN) market. Devcon ran a proof-of-concept with the VeloCloud technology and has since standardized its WAN environment on the vendor’s products.

VeloCloud’s software products run on standard x86 systems in a company’s branch offices or remote sites, as well as in the cloud, by connecting to VeloCloud Gateways housed in cloud data centers worldwide run by Amazon Web Services, Equinix and others. The gateways ensure that all applications and workloads are delivered via the most optimized data paths and enable network services to be delivered from the cloud. VeloCloud Edges are zero-touch appliances at the remote and branch sites that provide secure connectivity to applications and services. They also offer such features as deep application recognition, performance metrics, virtual network function (VNF) hosting and quality-of-service (QoS) capabilities. Centralized management is provided by VeloCloud Orchestrator for installation, configuration, one-click provisioning of virtual services and real-time monitoring.

For Tan, it meant more control over the WAN environments—from management to security to high performance—and the ability to address issues centrally rather than having to constantly send tech pros to multiple sites, a significant win for a company that has an IT staff of five people.

“For a company that doesn’t have a lot of IT people, this was a quick and easy way to get reliable and powerful WAN service and not have to spend a lot on infrastructure,” he said.

The Cloud Drives Interest in SD-WAN

For much of the past decade or more, not much new had happened in the enterprise networking space, the WAN included. That’s changed over the past couple of years, as network virtualization—including software-defined networking (SDN) and network-functions virtualization (NFV)—has come to the forefront to help enterprises address the challenges brought by such trends as the cloud, big data, mobility and the Internet of things (IoT).

More recently, innovation in the network has spilled over to the WAN, with enterprises and service providers looking to SD-WAN technologies to make their networks more flexible, agile and affordable. The WAN over the decades has relied on various connectivity protocols, from Synchronous Optical Network (SONET) and Asynchronous Transfer Mode (ATM) to MPLS. However, none of these options was made for a cloud-centric world.
Source: eWeek

Google's Antitrust Worries Won't Go Away Just by Paying Huge EU Fines

NEWS ANALYSIS: Google’s problems with the EU stem from Android phones and the fact that Google insists on including its own browser and shopping apps.

After six years of wrangling with European antitrust regulators, it appears that Google is about to be fined for its insistence that Android phone makers include its Chrome browser and because of how it handles shopping inquiries for hotels and flights, among other items. Adding to the problem, Google’s parent company, Alphabet, was charged a few days ago with using Android to squeeze out rivals, according to a report in Reuters.

The problem seems to be that Google is willing to enter into a consent decree in which it does not admit fault, which is the way things work in the United States. Things are different in the EU, however, and regulators there are insisting that Google admit it was at fault for its conduct and explain how it’s going to change things.

European Competition Commissioner Margrethe Vestager, meanwhile, has been receiving a steady stream of complaints from European and U.S. companies about Google’s practices. Because of this, she’s probably not going to scale back the EU’s demands.

So what’s going on here? After all, isn’t Google doing exactly the same thing as Microsoft and Apple? Well, no, it isn’t.

While Microsoft does include a browser and lots of apps with the copy of Windows 10 that runs on phones, its market share is so tiny it may as well be in negative numbers. In addition, until Microsoft and Google agreed to stop suing each other, Microsoft was one of the companies complaining about Google. Apple’s phones aren’t built by anyone but Apple, so the company isn’t in a position to use iOS or its own apps to dominate other companies.

Google provides Android to lots of phone makers and requires that some of its own software and search be included. This makes it the one mobile operating system company that has crossed the line and incurred the ire of European regulators. It’s also probably no coincidence that European regulators are less than totally enamored with the idea of a dominant American company pushing around smaller non-American companies, a few of which are European.

Other U.S. companies, notably Microsoft, have long since learned to their grief that the EU doesn’t appreciate American dominance. This is one reason why European computer users are able to buy Windows in Europe without Microsoft’s browsers. Microsoft eventually paid billions in fines after its 10-year battle with the same EU Competition Commission.

There’s little doubt that Google will also end up paying billions of its own. The EU has little reason to compromise with Google, since the company doesn’t really have anything to offer the EU in terms of a settlement except, of course, its money and a change in behavior. Worse, if Google tries to go to court to get the commission off its back, its chances of winning are slim.

So what’s going to happen? Google will be forced into a settlement of the EU’s choosing at some point in the future. If Google plans to keep selling its software and services in Europe, it will have to agree to whatever terms the EU demands, which will include fines, probably in the billions.
It will have to change the requirement that companies making and selling mobile phones in Europe include its browser and search service. 
Source: eWeek

Malware Posing as Legitimate Apps on Google Play, Security Firm Warns

PhishLabs says it has discovered 11 malicious apps posing as popular payment apps on Google’s official Android application store.

The people most at risk of downloading Android malware on their mobile devices are those who install apps from unofficial third-party mobile application stores. But that doesn’t mean that those who download apps from Google’s official Google Play store are completely immune to malicious software.

PhishLabs, a company that provides anti-phishing services, this week said it has discovered 11 malicious applications on Google Play since the beginning of this year, disguised as mobile apps for popular online payment services. The applications purport to give users access to their online payment accounts from their mobile devices, PhishLabs security analyst Joshua Shilko said in a blog post this week. But in reality, the apps’ only functionality is to collect the user’s logon credentials and personal data and send them to a remote command-and-control server belonging to the malware authors, Shilko said.

PhishLabs did not identify the 11 payment brands whose apps were spoofed and uploaded to Google Play. According to Shilko, 10 of the companies whose customers are being targeted by the malicious apps provide links on their Websites directly to their mobile applications. One of the companies being targeted explicitly notes on its Website that it has no mobile application, he added. All of the apps appear to have been developed by the same malware author or authors.

Android owners who mistakenly download and use the fake apps are presented with a Web page designed to look and act like the real brand’s Web page. Any logon credentials a user supplies to the fake app are immediately sent to the attacker.

The phishing apps then present the user with more forms seeking additional information, such as the answers the user might have supplied to security questions. Once the malware has collected and sent all the information, it presents the user with an error message claiming that the username and password combination was wrong, or some other similar error.

Google did not respond to a message seeking information on how the same attackers might have managed to upload 11 malicious apps to its Google Play store since the beginning of January. Google, which used to have relatively few controls for checking the security of applications loaded to its Android app store, these days reviews all submissions using a combination of manual and automated security testing processes.

But the presence of the malicious payment apps in Google Play suggests more work needs to be done in this regard, Shilko said. All of the malicious applications that PhishLabs identified went through Google’s security review process. The fact that none was identified as malware, despite some obvious red flags, raises questions about the effectiveness of Google’s security review processes, he said.

In separate comments to eWEEK, Shilko said PhishLabs has been communicating with Google regularly regarding each application as it is detected. “We also communicate with the registrars and hosting providers whose infrastructure is being utilized for the related phishing content,” he said. “As of the time of publication, all of the applications referenced in the post had been removed except for one.”
Source: eWeek

What You Have Wrong: Three Myths about RAIDs

DriveSavers has performed tens of thousands of RAID data recoveries in our history. Yes, we’ve seen our share of RAIDs. We’ve also heard our fair share of stories—some accurate and others, not so much.

Ready to get your facts straight? Here are three common myths about RAIDs.

Myth #1: All RAIDs are Redundant

Redundancy is one of the two biggest reasons users choose to use RAIDs, the other being performance.

Most RAID setups provide redundancy, meaning the same data is located on different drives. This is beneficial for two reasons: 1) if a drive fails, it can be replaced without loss of data or interruption of work; and 2) basic physical maintenance (e.g., swapping out older drives before they crash) can often be performed without interrupting work.

But despite “redundant” supplying the first letter of the RAID acronym (redundant array of independent disks), not all RAIDs actually incorporate redundancy.

A RAID 0 setup involves “striping” data across 2 or more drives so that different pieces of a single file live on every drive in the system. RAID 0 does not include copies of the data and, therefore, is not redundant. No matter how many drives are incorporated into this setup, if just one drive experiences a physical failure, the whole RAID is immediately inaccessible and data is lost. In fact, for this reason the chances of losing data are actually multiplied when using a RAID 0 as opposed to a single drive. The more drives used in this setup, the more likely the chance of data loss.
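The way striping multiplies risk can be made concrete with a little probability arithmetic. Below is a minimal sketch; the 5 percent annual failure rate is an illustrative assumption, not a measured figure:

```python
def p_raid0_data_loss(p_drive_fails, n_drives):
    """Chance that a RAID 0 array loses data: the striped set survives
    only if EVERY member drive survives, so one failure dooms it all."""
    p_drive_survives = 1.0 - p_drive_fails
    return 1.0 - p_drive_survives ** n_drives

# Assume each drive has a 5% chance of failing in a given year.
print(round(p_raid0_data_loss(0.05, 1), 3))  # 0.05  -- a single drive
print(round(p_raid0_data_loss(0.05, 4), 3))  # 0.185 -- a four-drive RAID 0
```

Under this toy model, a four-drive stripe is nearly four times as likely as a single disk to lose data in a year, and the gap widens with every drive added.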

It’s like having kids. The more kids you have, the more likely one of them will catch that flu that’s been going around. And once one of them catches it, the whole family is doomed.

So why would anyone use a RAID 0? The answer is performance. Files are always split into pieces, whether you are using a single drive or a RAID. When pieces of a file are spread across multiple drives, they can be pulled from all of those drives at once rather than just from one drive. To paint a picture, pretend you have two halves of an apple. You will be able to grab the whole apple faster with two hands than with one. This is because you can grab both halves at the same time when using two hands, but only one half at a time when using one hand. In much the same way, the more drives used in your RAID 0 the greater the data transfer rate (the rate at which data moves from one place to another). Just make sure you’re backing it all up.
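The parallel-read intuition above can be sketched the same way. This toy model ignores controller and bus overhead, and the 150 MB/s per-drive throughput is an assumed, illustrative number:

```python
def raid0_read_seconds(file_mb, n_drives, drive_mb_per_s=150.0):
    """Toy model of a striped read: the file's pieces are pulled from
    all member drives at once, so throughput scales with drive count."""
    return file_mb / (n_drives * drive_mb_per_s)

# Reading a 3 GB file: one drive takes 20 s, a four-drive stripe 5 s.
print(raid0_read_seconds(3000, 1))  # 20.0
print(raid0_read_seconds(3000, 4))  # 5.0
```

Real arrays fall short of this ideal scaling, but the trade is the same: each added drive buys speed while raising the odds of total loss.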

Myth #2: RAIDs are Backups

RAID 5, RAID 6 and mirrored systems typically have redundancy built in, which helps lessen the risk of losing data when a drive fails physically. Still, these devices most certainly are NOT backups. If too many drives fail, if a user accidentally erases files, if the RAID gets corrupted or if malicious programs take control and encrypt the contents, the data can be lost forever.
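The gap between redundancy and backup is easy to demonstrate in miniature. The toy mirror below (hypothetical code, not any vendor's implementation) duplicates every write across two disks, but it duplicates every delete, too:

```python
class ToyMirror:
    """Toy RAID 1: every operation is applied to both disks."""

    def __init__(self):
        self.disk_a = {}
        self.disk_b = {}

    def write(self, name, data):
        self.disk_a[name] = data
        self.disk_b[name] = data  # the redundant second copy

    def delete(self, name):
        # An accidental deletion propagates to every copy instantly.
        self.disk_a.pop(name, None)
        self.disk_b.pop(name, None)

mirror = ToyMirror()
mirror.write("report.docx", b"quarterly numbers")
mirror.delete("report.docx")           # oops
print("report.docx" in mirror.disk_a)  # False -- gone from both disks
print("report.docx" in mirror.disk_b)  # False
```

A true backup lives on separate media with its own history, so a mistake made on the array cannot reach it.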

Unfortunately, many users see RAID and think they are protected. This is a terrible assumption to make. Most of the RAID systems we’ve seen over the years have had redundancies built in, but have still had multiple failures, data corruption or deletion of data (either targeted or accidental).

Something that’s good to keep in mind: when you purchase a complete RAID system, all of the drives that make it up are usually the same make, model and age. Identical drives tend to have very similar, or even identical, life spans. Don’t forget that while one failed drive may not cause you to lose data, multiple failed drives certainly will.

Myth #3: RAID Failure is Always Obvious

If a single drive fails in a RAID with parity or redundancy in place, the system will continue to run in degraded mode at lower performance speed. Since even in degraded mode, users have access to all of their data, they may not notice that anything has changed. In this case, they will carry on, happily unaware, until the next drive fails. Then, catastrophe.

A dedicated system administrator who regularly and systematically checks a RAID for any problems or concerns may recognize when one of the drives has failed and replace it before any further failures occur. But what if two or more drives fail at once, as often happens? Remember—the drives in a RAID are often all the same make, model and age with the same life span and likelihood of failure.

The Truth: Even RAIDs Need to be Backed Up

All media, from phones to thumb drives to hard drives to RAIDs, should have multiple copies of data. The end user or system administrator has to weigh the risk of losing an hour’s, a day’s, a week’s, a month’s or a year’s worth of data. If the data loss would be too devastating for a given time period, then they have to find a solution that copies data to other media, such as another RAID, the cloud or tape—anything that will ensure the data is protected when their RAID either fails or suffers another data loss situation.

Contributed by Mike Cobb. As Director of Engineering at DriveSavers, Mike Cobb manages the day-to-day operations of the Engineering Department, including the physical and logical recoveries of rotational media, SSDs, smart devices and flash media. He also oversees the R&D efforts for past, present and future storage technologies. Mike makes sure that each of the departments and their engineers are certified and that they continue to gain knowledge in their field. Each DriveSavers engineer has been trained by Mike to ensure that the successful and complete recovery of data is their top priority. Mike Cobb has a B.S. in Computer Science from the University of California, Riverside. Since joining DriveSavers in 1994, Mike has worked on all aspects of engineering, as well as heading the Customer Service Department for several years.

Source: insideBigData

Infosys Launches Mana – a Knowledge-based Artificial Intelligence Platform

Infosys (NYSE: INFY), a global leader in consulting, technology and next-generation services, announced the launch of Infosys Mana, a platform that brings machine learning together with the deep knowledge of an organization to drive automation and innovation, enabling businesses to continuously reinvent their system landscapes. Mana, with the Infosys Aikido service offerings, dramatically lowers the cost of maintenance for both physical and digital assets; captures the knowledge and know-how of people and of fragmented and complex systems; simplifies the continuous renovation of core business processes; and enables businesses to deliver new and delightful user experiences leveraging state-of-the-art technology.

Over the last 35 years, Infosys has maintained, operated and managed systems with global clients across every industry. Building on this deep experience, Infosys has recognized the need to bring artificial intelligence to the enterprise in a meaningful and purposeful way; in a way that leverages the power of automation for repetitive tasks and frees people to focus on the higher value work, and on breakthrough innovation. Today’s AI technologies address part of this with learning and information; Infosys is now bringing this together in a fundamental way with knowledge and understanding of the business and the IT landscape – critical knowledge that is locked inside source code, application silos, maintenance logs, exception tickets and individual employees.

Infosys has already started working with a number of clients:

  • For a company with a large fleet of field engineers, individual productivity improved by up to 50% through the self-learning capabilities of the platform
  • For a major global telecommunications firm, the entry effort of agents was reduced by up to 80% by automating order validation and removing the need for corrective processes
  • For a global food and beverage manufacturer, Mana assisted sales managers in automating the sales planning processes by automatically resolving maintenance tickets for recurring issues. As the system self-learned, over time it provided solutions to known problems automatically, helping reduce the time required to resolve a maintenance problem

Infosys Mana

The Mana platform is part of the Infosys Aikido framework, which helps companies undertake non-disruptive transformation of their existing landscapes:

  • Ki: capturing the knowledge within legacy systems to renew, accelerate and enable them to bring entirely new experiences
  • Ai: delivering open, intelligent platforms that bring transformation – new kinds of applications, software tools, unprecedented levels of data processing, a radical new cost performance
  • Dō: design-led services which bring a Design Thinking approach that starts with a deep understanding of a client’s business and IT objectives, its users and customers, to find their most critical problems and biggest opportunities

Infosys Mana comprises three integrated components, all of which are based on open source technology:

  • Infosys Information Platform – an open source data analytics platform that enables businesses to operationalize their data assets and uncover new opportunities for rapid innovation and growth
  • Infosys Automation Platform – a platform that continuously learns routing logic, resolution processes and diagnosis logic to build a knowledge base that grows and adapts to changes in the underlying systems
  • Infosys Knowledge Platform – a platform to capture, formalize and process knowledge and its representation in a powerful ontology-based structure that allows for the reuse of knowledge as underlying systems change

Source: insideBigData

Iron Mountain Opens 'Near-Infinite' Capacity Cloud Archive

Based on EMC Elastic Cloud Storage, this archive features cost-efficient, rapid and anytime-anywhere access with enterprise-class service-level agreements.

Iron Mountain, the patriarch of all data storage companies at age 65, launched a new cloud archive offering April 28 that gives the company a fresh approach for the digital 21st century.

The IM Cloud Archive is described as a cost-effective repository for organizations needing to protect and preserve data retained short or long term for compliance, legal or value-creation purposes. This is not unlike most cloud archive services offered by companies such as Amazon, Hewlett-Packard Enterprise, Google, Microsoft, IBM, Livefyre, CommVault and a list of others.

What is a bit different is that the data is stored for the long term at Iron Mountain’s underground facility in Boyers, Pa. The archive is also designed to enable “near-infinite” scalability as the volume of data grows, using a pay-as-you-use model, the company said.

Iron Mountain built its reputation on its ultra-secure underground storage facilities in several locations around the world. One of them, in West Los Angeles, holds the original prints of most of Hollywood’s old movies from the last century.

The company lays to rest the doubts of potential enterprise customers about exactly where their important data and files will be stored in its cloud system. Iron Mountain tells them up front where everything is and how it can be accessed through the cloud, which in this case is operated by EMC.

Unlike on-premises disk and tape infrastructures, Iron Mountain Cloud Archive doesn’t require customer-owned storage equipment or archiving platforms, and it eliminates the resources needed to manage tape- and disk-refresh life cycles while avoiding the cost and hassle of a technology migration from one generation of tape or disk to another.

Based on EMC Elastic Cloud Storage (ECS), a global multi-purpose platform, the archive features cost-efficient, rapid and anytime-anywhere access with meaningful enterprise-class service-level agreements (SLAs). The archive also offers end-to-end information management, disaster recovery and data preservation, the company said.

Data movement to and from Iron Mountain Cloud Archive is handled in the open platform using ECS, which works natively with the Amazon S3 API, OpenStack SWIFT, CAS, Atmos and several cloud gateways, including EMC CloudBoost, CloudArray and Isilon Cloud Pools, Nasuni, Panzura, Seven10 and CTERA.
Source: eWeek

Microsoft Limits Cortana to Edge and Bing on Windows 10

Claiming to protect Windows 10’s integrated search experience, Microsoft is locking down its Cortana virtual assistant.

Cortana, Microsoft’s virtual assistant technology included with the Windows 10 operating system, is being reined in, the company announced yesterday. Like Siri on Apple’s iOS devices, Cortana lets users search the Web for sports scores, weather forecasts and answers to a variety of questions. As the Windows 10 user base has grown—270 million devices were running the OS at last count—Microsoft has discovered that Cortana has been taken in unintended directions, resulting in what the company claims is an unreliable user experience.

“Some software programs circumvent the design of Windows 10 and redirect you to search providers that were not designed to work with Cortana,” Ryan Gavin, general manager of Microsoft Search and Cortana, said in a statement. In particular, they can interrupt some of Cortana’s task completion and personalized search capabilities, he said.

In response, Microsoft is locking down the Cortana search experience.

Betting that no one likes a flaky assistant, virtual or otherwise, the company has taken steps to refocus Cortana’s attention. Gavin said that “to ensure we can deliver the integrated search experience designed for Windows 10, Microsoft Edge will be the only browser that will launch when you search from the Cortana box,” which appears next to the Start menu icon.

Wasting little time, Microsoft quickly followed up on its word. As recently as this morning, Cortana would display Bing search results in a user’s preferred Web browser, or Google Chrome in this writer’s case. Now, when responding to a spoken or typed question that requires a Web search, Cortana will open the Edge browser instead.

The change doesn’t extend to the rest of Windows 10, Gavin noted. Users can still set their own default browsers using the operating system’s default app manager and select their favored search engine within their Web browser’s customization options, he affirmed.

Microsoft isn’t the only company facing challenges with commercializing its digital personal assistant technology. Last week, it was revealed that Apple had settled a 2012 Siri patent lawsuit for $24.9 million, which will be paid to Dynamic Advances, a subsidiary of the Marathon Patent Group. The legal battle stems from a voice-activated app technology patent issued to the Rensselaer Polytechnic Institute in Troy, N.Y., four years before Siri first appeared in the iPhone 4S. Apple is immediately paying Marathon $5 million, with the rest to follow after several conditions are satisfied.

Meanwhile, in preparation for this summer’s highly anticipated Anniversary Update for Windows 10, Cortana is gaining some new Office-related functionality. A new Windows Insider preview build (14332) released this week enables users to search Office 365 content with Cortana. After linking their Office 365 work or school accounts with Cortana’s notebook, users can ask the assistant to find files stored on OneDrive for Business and SharePoint, as well as emails, calendar events and contacts. Also included are tweaks to the Cortana Reminders interface.
Source: eWeek

Defining the OpenStack Cloud Roadmap

VIDEO: Jonathan Bryce, executive director of the OpenStack Foundation, gives the new roadmap effort a ‘D’ grade, but expects improvement soon.

AUSTIN, Texas—In the standard philosophy that has defined open source since its earliest days, developers simply “scratch an itch” for things they want done rather than follow predefined product roadmaps. At the OpenStack Summit in Austin, Texas, this week, one of the things discussed was the emergence of the OpenStack Foundation roadmap effort to help provide visibility into what is coming next for the cloud.

In a video interview with eWEEK, Jonathan Bryce, executive director of the OpenStack Foundation, discusses his views on the roadmap effort and how he expects it to evolve. While OpenStack has not previously had a formal roadmap, there are multiple technical items, including community blueprints and feature requests that have been used since OpenStack was first created in 2010, that do serve to provide some direction.

“In a software company, product managers can go and dictate a roadmap and define what resources will work on specific efforts,” Bryce said. “You can’t do that in open-source.”

The OpenStack roadmap effort isn’t trying to dictate a roadmap either, but rather is attempting to collate all the different sources of information about what contributors and operators are doing or want to do, as they build OpenStack. Bryce noted that the OpenStack roadmap effort is providing a prioritized view of what people are talking about in the community.

“I would give us a ‘D’ right now in terms of the grade and how well it’s working, because it’s our first iteration,” Bryce said. “For us, the roadmap team is about collecting information from developers and users so people have good data about what is being worked on.”

Watch the full video interview with Jonathan Bryce, executive director of the OpenStack Foundation, below:

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.
Source: eWeek