Archive for May, 2011


Data in the “Cloud” Needs Fourth Amendment Protection

May 30, 2011

“Cloud computing” is the term for applications that are handled by third-party software and storage on the Internet, like Google Docs and QuickBooks Online, as opposed to programs like Microsoft Word and Quicken, which you load and access from your PC.

Gmail and Hotmail were early examples of cloud computing. The cloud computing concept has since expanded to include popular applications like photo editing and sharing, money management and social networking. It also takes in the increasing number of cloud-based storage services, like Dropbox, which allows you to port documents from client to client, and Carbonite, which performs near real-time back-up of data and documents on your PC.

What most Americans don’t realize is that data stored in the cloud is not protected by the Fourth Amendment the way that same data is if stored on a PC, CD or detachable hard drive in the home. A new bill in Congress, S.1011, introduced last week by Sen. Patrick Leahy (D-VT), is a big step toward closing this loophole. S.1011 extends the due process provisions against illegal wiretapping in the existing Electronic Communications Privacy Act (ECPA) to personal data stored in data centers owned and operated by third parties.

Source Data in the “Cloud” Needs Fourth Amendment Protection


Cloud CIO: The Two Biggest Lies About Cloud Security

May 28, 2011

Framing the cloud security discussion as a “public cloud insecure, private cloud secure” formula is an overly simplistic characterization. Put simply, there are two big lies (or, more charitably, two fundamental misapprehensions) in this viewpoint, both rooted in the radical changes this new mode of computing forces on security products and practices.

Cloud Security Misapprehension #1
The first is that private cloud computing is, by definition, secure merely because it is deployed within the boundaries of a company’s own data center. This misunderstanding overlooks the fact that cloud computing differs from traditional computing in two key ways: virtualization and dynamism.

The first difference is that cloud computing’s technological foundation is based on the presence of a hypervisor, which has the effect of insulating computing (and the accompanying security threats) from one of the traditional tools of security: examining network traffic for inappropriate or malicious packets. Because virtual machines residing on the same server can communicate completely via traffic within the hypervisor, packets can be sent from one machine to another without ever hitting a physical network, which is where security appliances are typically installed to examine traffic.
Crucially, this means that if one virtual machine is compromised, it can send dangerous traffic to another without the typical organizational protective measures even being involved. In other words, one insecure application can communicate attacks to another without the organization’s security measures ever having a chance to come into play. Just because an organization’s apps reside inside a private cloud does not protect it against this security issue.
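The bypass described above can be illustrated with a toy model (the class and VM names here are purely hypothetical): a security appliance watching the physical network never sees packets that two co-resident virtual machines exchange through the hypervisor’s virtual switch.

```python
class NetworkIDS:
    """Stands in for a security appliance on the physical network."""
    def __init__(self):
        self.inspected = []

    def inspect(self, packet):
        self.inspected.append(packet)

class Hypervisor:
    def __init__(self, ids):
        self.ids = ids   # the appliance sits on the physical wire only
        self.vms = {}

    def add_vm(self, name):
        self.vms[name] = []

    def send(self, src, dst, packet):
        if dst in self.vms:
            # Same-host delivery: the packet stays inside the virtual
            # switch and never reaches the physical NIC or the IDS.
            self.vms[dst].append(packet)
        else:
            # Off-host delivery crosses the wire and gets inspected.
            self.ids.inspect(packet)

ids = NetworkIDS()
host = Hypervisor(ids)
host.add_vm("web")
host.add_vm("db")

host.send("web", "db", "malicious payload")        # VM-to-VM, same host
host.send("web", "remote-backup", "nightly dump")  # leaves the host
```

The IDS ends up having inspected only the off-host packet; the attack on the co-resident database VM went completely unobserved, which is the gap the paragraph above describes.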
Of course, one might point out that this issue is present with vanilla virtualization, without any aspect of cloud computing being involved. That observation is correct. Cloud computing represents the marriage of virtualization with automation, and it’s in this second element that another security shortcoming of private clouds emerges.

Cloud computing applications benefit from this automation to achieve agility and elasticity–the ability to respond to changing application conditions by moving virtual machines quickly and by spinning up additional virtual machines to manage changing load patterns. This means that new instances come online within just a few minutes without any manual interaction. This implies that any necessary software installation or configuration must also be automated so that when the new instance joins the existing application pool it can immediately be used as a resource.
It also implies that any required security software must, likewise, be automatically installed and configured without human interaction. Unfortunately, many organizations rely on security personnel or system administrators to manually install and configure necessary security components–often as a second step after the rest of the machine’s software components are installed and configured.
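One way to close that gap is to make security installation part of the same automated bootstrap that builds the instance, and to refuse pool membership to any instance that skipped it. A minimal sketch, with all function and field names hypothetical:

```python
def install_app_stack(instance):
    instance["app"] = "installed"

def install_security_agent(instance):
    # In a real pipeline this step might be a cloud-init script or a
    # configuration-management run rather than a manual follow-up.
    instance["security_agent"] = "configured"

def provision_instance(name):
    """Build an instance with security baked into the same automated step."""
    instance = {"name": name}
    install_app_stack(instance)
    install_security_agent(instance)   # never deferred to a human
    return instance

def join_pool(pool, instance):
    # Gate: an instance without its security agent never serves traffic.
    if instance.get("security_agent") != "configured":
        raise RuntimeError(f"{instance['name']} is not hardened; refusing")
    pool.append(instance)

pool = []
join_pool(pool, provision_instance("web-042"))
```

The gate in `join_pool` is the important design choice: it turns the “security lags behind automation” mismatch into a hard failure at spin-up time rather than a silent vulnerability in production.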

In other words, many organizations have a mismatch between their security practices and the reality of what a cloud requires. Assuming that a private cloud is, ipso facto, secure is incorrect. Until your security and infrastructure practices are aligned around automated instantiation, you have a vulnerability.
Moreover, it’s critical to get them aligned. Otherwise, you face the likelihood that your application automation will outstrip your security practices, which is not a good situation. No one would want to be in the position of explaining why a supposedly secure private cloud ended up exposing a vulnerability because the automation characteristics of cloud computing had not been extended through all parts of the software infrastructure.
So, the first big lie about cloud computing is that private clouds are inherently secure. What is the second?

Cloud Security Misapprehension #2
The second misapprehension relates to assumptions about public cloud security: specifically, the assumption that security in public cloud computing rests solely with the cloud service provider (CSP). The reality is that security in a service-provider world is a responsibility shared between the provider and the user, with the former responsible for security in the infrastructure up through the interface point between application and hosting environment, and the user responsible for security in how the application interfaces with that environment and, importantly, within the application itself.
Failing to configure the application properly with respect to the environment security interface or failing to take appropriate application-level security precautions exposes the user to issues for which no provider can possibly be expected to take responsibility.

Let me provide an example. One company we worked with had placed its core application in Amazon Web Services (AWS). Unfortunately, it had not implemented appropriate security practices with respect to how it used AWS security mechanisms, nor with simple application design issues.

Amazon provides what is, in effect, a virtual machine-level firewall (called a Security Group) which one configures to allow packets to access specific ports. The best practice with respect to Security Groups is to partition them, so that very fine-grained port access is available per virtual machine. This ensures that only traffic appropriate for that type of machine goes to an instance. For example, web server virtual machines are configured to allow traffic on port 80 into the instance, while database virtual machines are configured to disallow traffic on port 80 into the instance. This blocks attacks on database instances (containing crucial application data) from the outside using web traffic.
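The partitioning just described can be modeled in a few lines. This is not AWS API code, just a toy lookup table (the group names and ports are illustrative) showing why per-role Security Groups keep web traffic from ever reaching a database instance:

```python
# Per-role Security Groups: each role gets only the ports it needs.
SECURITY_GROUPS = {
    "web-sg": {80, 443},  # public web traffic only
    "db-sg": {3306},      # database port, reachable from the app tier
}

# Each instance is assigned to exactly one role-specific group.
INSTANCE_GROUPS = {
    "web-1": "web-sg",
    "db-1": "db-sg",
}

def ingress_allowed(instance, port):
    """Return True if the instance's security group permits the port."""
    group = INSTANCE_GROUPS[instance]
    return port in SECURITY_GROUPS[group]
```

With partitioned groups, `ingress_allowed("web-1", 80)` is true while `ingress_allowed("db-1", 80)` is false. Collapse both instances into a single shared group holding the union of all ports, and the database instance would accept port 80 traffic too.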

To construct a secure application, one must use Security Groups properly. This organization had not. It used one Security Group for all traffic to all instances, which meant that every type of instance was exposed to any type of traffic destined for any instance. Clearly, a poor use of AWS security mechanisms.
Regarding the organization’s application itself, it had implemented poor security practices. Instead of partitioning application code among different types of machines, it had loaded all application code into a single instance, which meant the same instance that received traffic for its corporate website also had code containing proprietary algorithms running on it as well.

The important fact about this situation: If this organization assumed that all security responsibility lay with the CSP (Amazon Web Services, in this case), it would be extremely negligent, because it had not taken important steps to address security issues for which no CSP could be responsible. This is what shared responsibility implies–both parties have to step up to the security aspects in their control, and failing to do so means the application is not going to be secure. Even if the CSP does everything correctly for portions of the cloud application within its control, if the application owner fails to implement its security responsibility correctly, the application is going to be insecure.

The reality is that organizations are increasingly going to deploy applications in public CSP environments. It is vital that security teams step forward to ensure their organizations take every step possible to implement applications that are as secure as possible, and that means understanding which steps the organization itself must take in that regard.

Security is, so to speak, the third rail of cloud computing. It is constantly cited as an inherent benefit of private clouds and a fundamental shortcoming of public cloud computing. The truth is far more ambiguous than either position implies. Asserting the putative security shortcomings of public cloud environments without seriously considering how to mitigate them is irresponsible; it treats the assertion itself as grounds for dismissal, with no further need to investigate mitigation techniques.

A poorly managed and configured private cloud application can be quite vulnerable, and a properly managed and configured public cloud application can achieve very good security. Characterizing the situation as black and white is simplistic and does a disservice to the discussion.

Far more productive in both environments is to query what actions must be taken to achieve as secure an application as possible, within the constraints of time, budget, and risk tolerance. Security is never a question of black or white, but rather a question of how light a shade of gray is possible, given the particulars of a specific environment and application. Failing to acknowledge that does a disservice to the topic and to how best to ensure an organization’s infrastructure is as efficient and cost-effective as possible.

Source Cloud CIO: The Two Biggest Lies About Cloud Security



Why Privacy Matters Even if You Have ‘Nothing to Hide’

May 27, 2011

When the government gathers or analyzes personal information, many people say they’re not worried. “I’ve got nothing to hide,” they declare. “Only if you’re doing something wrong should you worry, and then you don’t deserve to keep it private.”

The nothing-to-hide argument pervades discussions about privacy. The data-security expert Bruce Schneier calls it the “most common retort against privacy advocates.” The legal scholar Geoffrey Stone refers to it as an “all-too-common refrain.” In its most compelling form, it is an argument that the privacy interest is generally minimal, thus making the contest with security concerns a foreordained victory for security.

The nothing-to-hide argument is everywhere. In Britain, for example, the government has installed millions of public-surveillance cameras in cities and towns, which are watched by officials via closed-circuit television. In a campaign slogan for the program, the government declares: “If you’ve got nothing to hide, you’ve got nothing to fear.” Variations of nothing-to-hide arguments frequently appear in blogs, letters to the editor, television news interviews, and other forums. One blogger in the United States, in reference to profiling people for national-security purposes, declares: “I don’t mind people wanting to find out things about me, I’ve got nothing to hide! Which is why I support [the government’s] efforts to find terrorists by monitoring our phone calls!”

The argument is not of recent vintage. One of the characters in Henry James’s 1888 novel, The Reverberator, muses: “If these people had done bad things they ought to be ashamed of themselves and he couldn’t pity them, and if they hadn’t done them there was no need of making such a rumpus about other people knowing.”

I encountered the nothing-to-hide argument so frequently in news interviews, discussions, and the like that I decided to probe the issue. I asked the readers of my blog, Concurring Opinions, whether there are good responses to the nothing-to-hide argument. I received a torrent of comments:

•My response is “So do you have curtains?” or “Can I see your credit-card bills for the last year?”
•So my response to the “If you have nothing to hide … ” argument is simply, “I don’t need to justify my position. You need to justify yours. Come back with a warrant.”
•I don’t have anything to hide. But I don’t have anything I feel like showing you, either.
•If you have nothing to hide, then you don’t have a life.
•Show me yours and I’ll show you mine.
•It’s not about having anything to hide, it’s about things not being anyone else’s business.
•Bottom line, Joe Stalin would [have] loved it. Why should anyone have to say more.

Source Why Privacy Matters Even if You Have ‘Nothing to Hide’


How organizations should approach cloud security

May 25, 2011

A recent Gartner survey in India revealed that the top three concerns organizations had with cloud services were: 1) security concerns around data residing in a third party’s data center, 2) concerns with the reliability of cloud services, and 3) the maturity of vendor offerings. Organizations have consistently cited data security as the top barrier to cloud adoption, across geographies, verticals and company sizes. Of course, the discrete considerations and concerns within the larger scope of data security will depend on the size of the organization, the criticality of the cloud workload to the business, regulatory/compliance issues and the type of service deployed in the cloud.

While there is significant hype around security issues as a barrier to cloud adoption, deciding whether a particular cloud provider is ‘secure enough’ is often a relative decision rather than an absolute one. Every provider is likely to have different underlying infrastructure, security policies and SLAs, and the onus is on the end user of the cloud service to map their security policies to what is on offer and determine whether it meets their requirements. In this scenario, organizations should seriously consider deploying some form of third-party cloud security and management tools that help end users assess and maintain a security view of the various applications and data they have deployed in the public cloud.

From an overall organizational perspective, alignment of the cloud service deployment to traditional IT processes and business units is essential. IT security policies and processes often link into larger business critical processes and it is important to review and assess audit, incident response, governance and compliance policies from a forward looking perspective.

Of course, the security implications and considerations can vary depending on the type of service consumed. For example, even if an organization using non-critical SaaS applications suffers a security breach at the provider location, the impact is relatively localized and may not affect the other in-house applications and data that are critical to the business. However, organizations using cloud IaaS to directly support or enable some part of the business will have to be more careful and stringent when selecting a cloud provider, with different SLAs and performance guarantees. From an IaaS perspective, a majority of commodity public cloud providers offer negligible remediation and SLAs in case of disruption of service. However, some IT vendors turned cloud IaaS providers, such as Fujitsu, HP and IBM, are turning their focus to the enterprise segment and are in a position to offer better performance guarantees, albeit at an increased cost.

The other side of the equation is that cloud computing may actually strengthen the IT security posture of smaller organizations, while simultaneously enabling net new business growth. While a large proportion of cloud providers today offer very little in terms of granular, performance-driven SLAs, the security tools and firewalls deployed within a provider’s environment may in fact have cost the organization far more to deploy in house.
One of the key issues that often confuses would-be cloud adopters is: what is the difference between security considerations in a cloud environment versus a traditional hosting environment? While it is true that organizations with mature vendor management, audit and patch management processes will be able to adapt to cloud security policies faster than others, there are distinct cloud-specific security considerations to take into account as well. Essentially, cloud users need to understand that the shared-hardware, multi-tenant model of cloud computing (IaaS in particular) means that they relinquish more control to the cloud provider than they did to their hosting provider. Moreover, the changes a cloud provider makes to its hardware or software infrastructure policies are in most cases not visible to end users.

Organizations planning to use public cloud services should ensure that they get as much visibility as possible into the provider’s infrastructure, negotiate custom SLAs for their deployments where possible, and ensure that their organizational IT security policies cater for the potential risk of a data breach in the cloud.

Overall, cloud security encompasses many different cloud deployment models and services and while it is easier for organizations to come up with individual security policies for each deployment, it is prudent for them to start thinking about a more comprehensive organization wide strategy for tackling future deployments of cloud services as well. Through 2015, as organizations start using public clouds as an enabler or even a key component of critical business processes, these unified cloud security policies will need to evolve in response to market dynamics and technology changes.

Source How organizations should approach cloud security



Opinion: How to be a modern IT manager

May 18, 2011

The spectre raised by Nicholas Carr in 2003 – that IT doesn’t matter – has risen again, summoned by the two prevailing trends of the day: cloud computing and the consumerisation of IT.

IT managers and CIOs today would do well to read the original Harvard Business Review essay, in which Carr argues that IT is becoming a commodity the same way rail transportation or electric power did. The essay has well-known flaws, the worst of which is Carr’s narrow characterisation of IT as network, compute, and storage infrastructure. But in at least one respect Carr was prescient: The commoditisation of those core infrastructure functions is now taking place.

For an increasing number of workloads, it matters less and less whether you spin up VMs in Amazon’s datacentre or in your own – or even whether you license applications on-premises or rent them from a SaaS provider. Today’s key questions are “How fast can I get it?” and “What’s the TCO?”

At the same time, CIOs and IT managers are under assault from a commoditising force Carr never anticipated: Consumer devices that users bring to work. IT has been forced to accommodate mobile devices tied to commercial networks because smartphones and tablets deliver huge gains in productivity.

Those who try to erect a Maginot line against commoditisation, and insist that all IT from infrastructure to mobile devices must stay under their complete control, hobble their business’ competitiveness and limit their careers. At the same time, no company would tolerate the chaos of lines of business buying and deploying their own technologies without regard to security, integration, or economies of scale.

Finding a middle ground between those extremes is part, but not all, of becoming a modern CIO. We are entering a period of accelerated change, one that includes the break-up of the Windows desktop paradigm. Here is my advice to CIOs, IT managers, CTOs, and other technology leaders:

Become a technology strategist. The era of the CIO who simply “keeps the joint running” is over. Just as good business strategists need to think beyond the next quarter and explore new opportunities, IT leaders need to look for emerging technologies that accelerate innovation, from promising cloud applications to internal app stores to advanced virtualisation management. Standing still isn’t a safe place to be anymore.

Build a service catalogue. Gone are the days when you can simply serve the business stakeholders who bark loudest with one-off, end-to-end infrastructure and apps to meet their needs. Technology leaders need to step up and say: “You want to drive the cost out of operations? Then give me the resources up front to provision shared services and the authority to make every appropriate department use them so I get maximum economies of scale”. Embrace commoditisation when you can and you’ll free up resources.

Cultivate your developers. When infrastructure becomes commoditised, developers are the big winners. Development, test and deployment cycles shorten dramatically, leaving more time for developers to interact with the business, engage in agile practices, and create applications that accelerate business processes. Coming out of a disastrous recession, the number one imperative is to jump on new business opportunities. Create a development culture where you can deliver apps to meet that challenge with all appropriate speed.

Practice postmodern security. Networks are permeable. In fact, most are already infected. The perimeter still needs to be protected, of course, but concentrate your efforts on authentication, access control, encryption and other security technologies that protect data and applications.

Empower your users. In most businesses, the most valuable employees are often the ones who have the initiative to provision their own technology. If they’re not going to wait for IT to build what they want and go to the cloud instead, don’t clamp down; help them find the right providers and create a framework for provisioning instead. Rather than ban mobile devices, create policies that enable people to use them safely – and explore new technologies like mobile client hypervisors.

The truth is that every part of IT matters – but a smooth-running, elastic infrastructure is the new baseline. To stay strategic, CIOs need to drive cost out of infrastructure and shift investment to technology and development that grows the business. And when IT makes users its ally, and shares control over technology, IT isn’t diminished – it just broadens and deepens its integration with business.

Source Opinion: How to be a modern IT manager



Offshoring: Preparing for India’s proposed privacy rules

May 17, 2011

The Indian government has finally taken a step toward creating a comprehensive set of data protection rules to safeguard privacy, but the proposed regulations released this spring are likely to have a major impact on the global enterprises doing business with Indian outsourcers.

The draft regulations, which deal with the protection of personal information, are more stringent than either the Gramm-Leach-Bliley Act in the U.S. or the EU Directive in Europe and would create new requirements for companies that outsource to service providers in India or maintain their own operations there, say Miriam H. Wugmeister, partner in the law firm Morrison Foerster and Cynthia J. Rich, senior international policy analyst with the firm.

“Given all the personally identifying information, confidential information, and sensitive data collected by organizations, both purely online and in the course of doing business, it was about time that the Indian government took action to update its policy,” says Tony Filippone, research vice president with outsourcing analyst firm HfS Research. He notes that India’s privacy legislation has remained largely unchanged for more than 100 years.

The entire offshore outsourcing industry has been slow to protect personal data, says David Rutchik, partner in outsourcing consultancy Pace Harmon. Offshore outsourcing companies’ lack of urgency around data protection has created a lot of uncertainty for outsourcing customers. (For more on China’s draft data privacy regulations, read IT Outsourcing in China: What CIOs Need to Know About New Data Privacy Guidelines.)

The new rules are intended to showcase a new commitment by India to rigorously protect data, but they could dampen offshore outsourcing business. Most notably, prior written consent will be required, without exception, to collect and use sensitive data about Indian citizens and about any person whose personal information is collected within the country.

The specifics and timing of implementation and enforcement have not been clarified, and may not be for some time, “which puts every outsourcing client in limbo in the interim period,” Filippone says. Companies with operations or data in India should take the following seven steps to prepare for possible implications.

1. Review current data protection policies and procedures. What data is being captured and stored in India? What opt-in or opt-out policies are in place? Document all existing internal rules.

2. Create a response team. Identify who would be involved with defining and implementing a response to India’s privacy act once the details are clarified, says Stan Lepeak, director of research in KPMG’s shared services and outsourcing advisory group. Team members might include CIO, legal counsel, outsourcing governance teams, and external consultants.

3. Take a closer look at customer-facing activities in India. Processes like order entry, customer service, collections, and outbound sales will be hardest hit if the new privacy law is enacted. “[Companies] will need to secure prior written consent from customers prior to collecting personal data over the phone, and even then, sensitive personal data won’t be permitted to be shared unless it is deemed necessary,” says Rutchik. “These types of issues may significantly impede an enterprise’s ability to properly and efficiently interact with its customer base.”

4. Consider the impact on IT’s internal customers. Little notification is given to employees regarding collection and use of their personal data, even though systems supporting human resources, payroll, and help desk operations all contain sensitive personal data that could fall under the new privacy regulations. “I doubt every organization makes notifications to employees or writes privacy policies to include employee data so some back office operations are likely exposed to risk under this law,” says Filippone.

5. Get on the same page with providers. Review all data protection policies and procedures in your offshore outsourcing contracts. “Obtain the service provider’s interpretation of the act and have the providers explain how they plan to respond to the act’s requirements,” says Lepeak.

6. Prepare for increased standardization. “With these new regulations in place, offshore providers will likely become more rigid in how they operate and more reluctant to tailor their processes to meet customer needs,” says Rutchik. “These restrictions could, in fact, make offshore providers less attractive as a result.”

7. Protect yourself. IT outsourcing vendors may seek to impose data security obligations on their customers to ensure that the customer complies with Indian law, say Wugmeister and Rich. “The new regulations may begin showing up in offshore outsourcing contracts as enterprises will want to be indemnified from specific actions by offshore providers,” Rutchik says.

Source Offshoring: Preparing for India’s proposed privacy rules
