Report Finds SQL Server on Azure Is Faster Than AWS

A new research report has shown that SQL Server runs faster in Microsoft’s Azure cloud than it does on Amazon Web Services (AWS).
GigaOm, a research firm sponsored by Microsoft, compared the throughput performance of SQL Server on Azure Virtual Machines with SQL Server on AWS EC2. The results showed that the former performed significantly better.
“Azure emerged the clear leader across both Windows & Linux for mission-critical workloads. It was up to 3.4x faster and up to 87 percent less expensive than AWS EC2,” Microsoft commented in a Dec. 2 blog post. Microsoft chose the competitors, the test and the Microsoft configuration for this sponsored report. The GigaOm Transactional Field Test was based on the TPC Benchmark E (TPC-E) industry standard.
After conducting a number of tests, GigaOm concluded that Microsoft SQL Server on Azure Virtual Machines delivered 3.4x higher performance on Windows than SQL Server on AWS Elastic Compute Cloud (EC2). When tested on a Linux server OS, SQL Server on Azure Virtual Machines performed 3x better than on AWS. SQL Server on Azure Virtual Machines (VMs) also had up to 86.8 percent better price-performance than AWS when using License Mobility with a three-year compute commitment, and up to 32.2 percent better price-performance when comparing the high-speed disk offerings, AWS io1 and Azure Ultra Disk.
GigaOm’s charts compare performance and price-performance (source: Microsoft). Performance is measured in throughput (transactions per second, tps); higher is better. The price-performance metric is three-year pricing divided by throughput (tps); lower is better. Commenting on the tests, Microsoft said that Azure BlobCache, which offers free reads, is a key reason why Azure’s price-performance is better than AWS’s: it allows customers to save significant amounts of money, since most online transaction processing (OLTP) workloads have roughly a 10:1 read-to-write ratio.
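For readers who want to see the arithmetic behind that metric, here is a minimal sketch in Python. The cost and throughput figures are invented purely to illustrate the formula; they are not GigaOm’s measurements.

```python
# Illustration of GigaOm's price-performance metric:
# three-year cost divided by throughput (tps); lower is better.
def price_performance(three_year_cost_usd: float, throughput_tps: float) -> float:
    """Return cost per transaction-per-second over a three-year term."""
    return three_year_cost_usd / throughput_tps

# Invented numbers, purely to show the arithmetic (not GigaOm's results):
azure = price_performance(three_year_cost_usd=300_000, throughput_tps=1_500)  # 200.0
aws = price_performance(three_year_cost_usd=400_000, throughput_tps=1_000)    # 400.0
print(f"Azure: ${azure:,.2f}/tps, AWS: ${aws:,.2f}/tps")  # the lower value wins
```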

Report: AWS Is the Winner as VMware Workloads Move to the Cloud

The vast majority of businesses that run VMware environments plan to move at least some of those environments to the cloud by next year, and nearly half of them will use Amazon Web Services (AWS). That is the conclusion of a recent survey by Druva, a provider of cloud data protection solutions. For its 2017 VMware Cloud Migration Survey, Druva polled 443 VMware professionals and found that 90% of them expect to have VMware environments in the cloud by 2018. Nearly half (48%) of those surveyed cited AWS as their preferred platform to move to, while Microsoft Azure was cited by 25 percent of respondents, making it the second-most popular cloud platform. Druva says that VMware’s partnership with AWS is a key factor in the widespread shift to the cloud. The two companies announced a joint initiative last October to offer VMware solutions on AWS as part of a new offering called “VMware Cloud on AWS.” According to some reports, the two companies are also working on a separate project that would allow enterprises to run AWS from their private datacenters, which would put AWS and VMware in direct competition with Microsoft and its Azure Stack offering. According to the report, “Since October 2016, when VMware announced its partnership (with Amazon Web Services), organizations have been looking at the cloud to provide consistent functionality while enabling cloud workload mobility with products such as vMotion so application resources can stay where they make the most sense for their business.” Organizations want to take advantage of the cloud while still using the skills of their existing VMware administrators, and the cloud is a good option for managing VMware workloads, especially given the impact downtime and poor application performance can have on user productivity. Despite the appeal of the cloud, most organizations will continue to keep some workloads on-premises; in fact, over three quarters (78%) of respondents said they intend to use a hybrid approach. Other findings from the survey:

  • Disaster recovery applications were the most popular workloads among those who have started their cloud migrations, accounting for 31 percent.
  • Nearly two-thirds (63%) of respondents said they are thinking about re-architecting applications for the cloud. According to the report, “There is a growing desire for cloud-based applications to augment the VMware environments on public cloud platforms” as a result of the increasing complexity and number of data centers.
  • More than a third (38%) of respondents cited cost as a major reason for migrating their workloads to the cloud, meaning cost was not a major factor for a large portion of survey respondents. “Our survey shows that such migrations result from a strong business need,” Druva stated.

Register to access Druva’s complete report.

Report: AWS Makes $1B in New Cloud Deals

Amazon Web Services’ dominance in the cloud market will only grow as it reportedly inks major deals with Symantec and SAP that are estimated to be worth $1 billion.
Bloomberg reported the deals this week, citing an AWS memo. Symantec, SAP and AWS all declined to comment on the report.
Both Symantec and SAP have partnered with AWS in different capacities. AWS and SAP have been steadily increasing interoperability of cloud products, while Symantec has tapped AWS for the “vast majority” of its cloud workloads.
Bloomberg reports that the two companies have each expanded their commitments to AWS to about $500 million over five years, or roughly $200 million per year combined.
According to the report, SAP’s contract will focus on storage, data management and compute offerings, as well as artificial intelligence (AI) and Internet of Things (IoT) services.
Symantec, meanwhile, has agreed to move its Managed Security Service, Network Protection and Website Security Service products to the AWS cloud, Bloomberg reported. The agreement would increase Symantec’s AWS investment more than sevenfold.
Bloomberg reported that AWS had apparently beaten out Microsoft for the Symantec contract. Microsoft’s Azure platform is AWS’ closest competitor in the public cloud space, though still a distant one.
The news of the Symantec and SAP partnerships comes as AWS appears to be on the verge of securing yet another blockbuster contract: a 10-year deal worth $10 billion to operate the U.S. Department of Defense’s Joint Enterprise Defense Infrastructure (JEDI).
The bidding process for JEDI has been controversial. Oracle and IBM have protested the contract’s single-vendor structure, which they claim favors AWS.

Report: AWS Performance Predictability Related to Internet Reliance

ThousandEyes has published cloud performance research that cites AWS’s reliance on the public Internet, rather than its own backbone network, as contributing to operational risks that can affect performance predictability. The San Francisco-based firm, which calls itself an Internet and cloud intelligence business, published its second annual Cloud Performance Benchmark this month. It compares global network performance and connectivity differences among five major public cloud providers: Amazon Web Services, Google Cloud Platform (GCP), Microsoft Azure, Alibaba Cloud and IBM Cloud.
Cloud connectivity falls into two camps, the company stated in a press release accompanying a chart of the results (source: ThousandEyes): some cloud providers rely heavily on the public Internet to transport traffic rather than on their own backbones, which can affect performance predictability. AWS and Alibaba Cloud rely heavily on the public Internet for most transport, which can lead to greater operational risk and less predictable performance, while Google Cloud and Azure rely heavily on their private backbone networks to carry customer traffic, protecting it against the performance variations associated with delivery over the public Internet. IBM takes a hybrid approach that varies by region. In a blog post explaining this finding, the company advised customers to pay attention to it, noting that although the cloud providers tested generally showed comparable performance in terms of bi-directional network latency and architecture, differences in connectivity can affect traffic between users and certain cloud hosting regions. “For example, Azure and GCP use their backbones extensively to carry user-to-hosting-region traffic. AWS and Alibaba rely heavily on the Internet to transport user traffic, while IBM uses a hybrid approach. Exposure to the Internet increases uncertainty in performance, creates risk to cloud strategies, and raises operational complexity. Enterprises planning public cloud connectivity should consider their organization’s tolerance for this unpredictability.”
The report also examined AWS Global Accelerator, described by AWS as “a service that improves availability and performance of applications with local and global users.” AWS states that it provides static IP addresses as a fixed entry point to application endpoints in AWS Regions. ThousandEyes found that “AWS Global Accelerator does not always outperform the Internet.” The Global Accelerator routes traffic over an optimized path through AWS’ densely connected backbone, but the performance improvements are not uniform across the globe: while the Global Accelerator beats the default Internet connectivity path in many cases, there are still instances where the improvement is negligible or performance is worse than default AWS connectivity. Enterprises should evaluate the performance gains of the Global Accelerator before deciding on a strategy, to maximize their ROI. ThousandEyes also listed other findings in the report:

  • Cloud performance anomalies can be significant depending on the provider, the hosting region and the user location.
  • All cloud providers, including Alibaba, pay a performance penalty when traffic crosses the Great Firewall of China.
  • Cloud performance is affected by the choice of US broadband ISPs.

The company commented further on the providers’ networks, stating that “the major public cloud providers are (hopefully) continuously optimizing their networks to improve performance and stability for customers.”

Report: AWS Mulls Entering Enterprise Networking Space

Amazon Web Services Inc. (AWS) is reportedly looking to enter the enterprise networking market.
The report, published Friday, triggered a decline in the stock prices of networking heavyweights Cisco Systems and Juniper Networks and generated numerous articles speculating on the potential disruption to the industry.
The Information, citing unnamed sources, reported that “Amazon Web Services Targets Cisco in Networking” (subscription required). The news immediately sent the stock prices of Cisco and Juniper lower, and Arista was seen as likely to be affected as well.
According to the report, AWS would leverage commodity “white-box” hardware and open source software, key tenets of the software-defined networking (SDN) movement, which has also shaken up an industry long dominated by Cisco.
Cisco has, however, jumped on the SDN bandwagon and pursued other modern networking initiatives to stay relevant in the new age of networking, but it may have a harder time contending with a cloud giant like AWS.
The report stated that Amazon plans to undercut competitors on price, as it does in many other categories. According to one person familiar with the program, the white-box switches could be priced 70 percent to 80 percent lower than comparable switches from Cisco.
The open source software would connect to AWS cloud services such as storage and servers, letting customers pair the white-box hardware with AWS’ existing offerings.
The Information stated that Amazon has used white-box switches like these in its own data centers for some time, a standard practice among large technology companies. According to a person familiar with the project, AWS expects to launch the networking switches for outside customers within the next 18 months. The person said AWS is currently working with white-box manufacturers such as Celestica, Edgecore Networks and Delta Networks on the switches and is open to collaborating with other companies.
Amazon didn’t comment on the report, and Cisco offered a stock PR response quoting its leadership, but other companies, such as intent-based networking specialist Apstra, did weigh in.
“The combination of white-box hardware and open source software allows companies to lower the cost of building their networks,” the report quoted Mansour Karam, CEO of Apstra, a startup that sells software to manage networks built with devices from multiple suppliers. “It’s not surprising that Amazon would want to participate in, and control, the datacenter as the on-ramp for their cloud services,” Karam said.

Domain 2 of AZ-204: Develop for Azure Storage

Table of Contents
Types of Azure Storage
Domain 2: Develop for Azure storage
Develop solutions that use Cosmos DB storage
Develop solutions that use blob storage
Azure Storage is a cloud-based storage platform that supports modern applications. Data volumes grow every day, so storage solutions have to keep up. Azure Storage lets you store large amounts of data objects in a highly available and massively scalable manner, and it provides strong security, durability and accessibility, among other benefits. Client applications and users can access Azure Storage data items from anywhere in the world via HTTP or HTTPS.
Types of Azure Storage
Azure Storage can be expanded with additional disk storage. There are five main components to Azure Storage:
Azure Blob storage: Stores large amounts of unstructured data as binary large objects (BLOBs).
Azure Table storage: Now also offered as part of Azure Cosmos DB; Azure tables store structured NoSQL data.
Azure File storage: A fully managed file-sharing service that runs over the Server Message Block (SMB) protocol and can be used on-premises or in the cloud.
Azure Queue storage: A message storage service that can be accessed via HTTP or HTTPS from anywhere in the world (a minimal usage sketch follows this list).
Disk storage: There’s a choice of two types of virtual hard drives (VHDs), managed and unmanaged.
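As an illustration of the list above, here is a minimal sketch of sending and receiving a message with Azure Queue storage using the azure-storage-queue Python SDK. The queue name and the AZURE_STORAGE_CONNECTION_STRING environment variable are assumptions made for the example, not values from this article.

```python
import os

from azure.core.exceptions import ResourceExistsError
from azure.storage.queue import QueueClient  # pip install azure-storage-queue

# Assumed: the storage account connection string is exported in this variable.
conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
queue = QueueClient.from_connection_string(conn_str, queue_name="tasks")

try:
    queue.create_queue()
except ResourceExistsError:
    pass  # the queue already exists, which is fine for this sketch

queue.send_message("process-order-42")   # enqueue a message over HTTPS

for msg in queue.receive_messages():     # dequeue and handle messages
    print("dequeued:", msg.content)
    queue.delete_message(msg)            # remove the message once handled
```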
The Azure Developer AZ-204 certification covers five domains.

Domain 1: Develop Azure compute solutions (25-30%)
Domain 2: Develop for Azure storage (15-20%)
Domain 3: Implement Azure security (20-25%)
Domain 4: Monitor, troubleshoot and optimize Azure solutions (15-20%)
Domain 5: Connect to Azure services and third party services (15-20%)
This article will focus on the second Domain of AZ-204: “Develop for Azure storage.”
Domain 2 of AZ-204: Develop for Azure storage
Storage is an essential component of Microsoft Azure applications, and this domain teaches you how to use it effectively. It carries a weight of 15-20% on the exam. It covers how Azure developers write code to create, read, update and delete data, how to optimize database consumption, and how to create storage containers in Azure using Cosmos DB storage and Blob storage.
“Develop for Azure storage” is the second domain of the AZ-204 certification exam. It includes the following subtopics.
1. Develop solutions that use Cosmos DB storage. Cosmos DB is Azure’s premium, globally distributed, low-latency, highly responsive database service. This section teaches you how to select the right API for your project, how to interact with data using the correct SDK, and how to create Cosmos DB containers and populate them with data. Next, you will learn how to configure throughput and partitioning for maximum performance, how to select the right consistency level for your operations, and how to use stored procedures, triggers and change feed notifications for server-side handling. A minimal SDK sketch follows.
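Here is a minimal sketch of that workflow with the azure-cosmos Python SDK. It assumes the account endpoint and key are available in COSMOS_ENDPOINT and COSMOS_KEY environment variables; the database, container and partition key names are invented for illustration.

```python
import os

from azure.cosmos import CosmosClient, PartitionKey  # pip install azure-cosmos

# Assumed environment variables holding the Cosmos DB account endpoint and key.
client = CosmosClient(
    url=os.environ["COSMOS_ENDPOINT"],
    credential=os.environ["COSMOS_KEY"],
    consistency_level="Session",  # one of Cosmos DB's five consistency levels
)

# Create (or reuse) a database and a container with an explicit partition key
# and provisioned throughput, the two main levers for scale and performance.
db = client.create_database_if_not_exists("appdb")
container = db.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),  # drives data distribution
    offer_throughput=400,                            # request units per second
)
print("container ready:", container.id)
```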
Azure Cosmos DB
Azure Cosmos DB lets you run a globally distributed NoSQL database with excellent throughput and low latency. Unlike traditional relational databases, which have a fixed set of columns and require each row to follow the table’s schema, Cosmos DB lets you manage your data even when it is stored in different data centers around the world. It supports multiple data models, including key-value, document, column-family and graph.
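Building on the same hypothetical account, the following sketch shows inserting and querying items with the SDK; the container name and item fields are again invented for illustration.

```python
import os

from azure.cosmos import CosmosClient  # pip install azure-cosmos

client = CosmosClient(url=os.environ["COSMOS_ENDPOINT"], credential=os.environ["COSMOS_KEY"])
container = client.get_database_client("appdb").get_container_client("orders")

# Upsert a document; the customerId field must match the container's partition key path.
container.upsert_item({
    "id": "order-1001",
    "customerId": "cust-7",
    "total": 42.50,
    "status": "shipped",
})

# Parameterized, SQL-like query; cross-partition because it doesn't filter on customerId.
items = container.query_items(
    query="SELECT c.id, c.total FROM c WHERE c.status = @status",
    parameters=[{"name": "@status", "value": "shipped"}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item["id"], item["total"])
```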
2. Develop solutions that use blob storage. Azure Blob Storage is tiered, highly available object storage for unstructured data.
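A minimal blob upload/download sketch with the azure-storage-blob Python SDK follows; the container and blob names and the connection-string environment variable are assumptions made for the example.

```python
import os

from azure.core.exceptions import ResourceExistsError
from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

# Assumed: a storage account connection string exported in this variable.
service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])
container = service.get_container_client("docs")

try:
    container.create_container()
except ResourceExistsError:
    pass  # the container already exists

# Upload a small blob, then read it back.
container.upload_blob(name="notes/hello.txt", data=b"hello blob storage", overwrite=True)
print(container.download_blob("notes/hello.txt").readall().decode())
```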

Domain 2: Cloud Data Security

The second domain of the CCSP is ‘Cloud Data Security’. This vast domain tests the candidate’s technical knowledge of:
The phases of the cloud data lifecycle
Cloud data storage architecture, including storage types, security threats and controls
Data security strategies and related objectives
This domain carries 20% weight in the CCSP exam. The sub-objectives for CCSP Domain 2 – Cloud Data Security are:
Understanding the Cloud Data Lifecycle (per Cloud Security Alliance guidance). The exam requires the candidate to have a thorough understanding of all stages of the cloud data lifecycle: create, store, use, share, archive and destroy. The candidate must also understand the security controls and risks associated with each stage, for example, how to upload data securely during the “create” phase.
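As one illustration of securing data during the “create” phase, the sketch below encrypts data client-side before upload using the cryptography package’s Fernet recipe. The upload_to_cloud function is a placeholder invented for this example, not a real SDK call.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Encrypt data on the client before it leaves the machine, so the "create" phase
# never exposes plaintext in transit or in cloud storage.
key = Fernet.generate_key()    # in practice, obtain keys from a key management service
cipher = Fernet(key)

plaintext = b"customer record: account 1234"
ciphertext = cipher.encrypt(plaintext)

def upload_to_cloud(name: str, data: bytes) -> None:
    """Placeholder for whatever storage SDK is actually in use."""
    print(f"uploading {name}: {len(data)} encrypted bytes")

upload_to_cloud("record-0001.bin", ciphertext)
assert cipher.decrypt(ciphertext) == plaintext  # only key holders can recover the data
```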
Designing and Implementing Cloud Data Storage Architectures. Subscribers can access and use cloud infrastructure, including shared resources, storage, servers and hard drives, on an as-needed basis, paying only for the services they use. Cloud software, also known as SaaS, is a subscription-based service that allows users to access the features of the software from anywhere, at any time, and to work with cloud applications via an API (Certified Cloud Security Professional).
Designing and Applying Effective Data Security Strategies. The next sub-objective tests the certification seeker’s knowledge of designing and planning data security strategies such as encryption, key management, masking and tokenization. The candidate must understand how to apply technologies such as masking and tokenization, controls tied to cloud storage time and duration, and newer cloud technologies like homomorphic encryption, which can process encrypted data without decrypting it.
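The following sketch illustrates masking and a toy tokenization vault in Python. It is a conceptual example only; a production system would keep the token mapping in a hardened, audited service rather than an in-memory dictionary.

```python
import secrets

# Masking: hide most of a value while keeping a recognizable tail for support staff.
def mask_card_number(pan: str) -> str:
    return "*" * (len(pan) - 4) + pan[-4:]

# Tokenization: swap the real value for a random token; the mapping lives in a vault.
_token_vault: dict = {}

def tokenize(value: str) -> str:
    token = secrets.token_urlsafe(16)
    _token_vault[token] = value   # a real system stores this mapping in a secured vault service
    return token

def detokenize(token: str) -> str:
    return _token_vault[token]

card = "4111111111111111"
print(mask_card_number(card))     # ************1111
token = tokenize(card)
print(token, "->", detokenize(token))
```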
Understanding and Implementing Data Discovery and Classification Technologies. For the next objective, candidates are expected to understand and apply different data discovery and classification technologies in context. Widely used data discovery methods include metadata-based, label-based and content-based discovery. Once data is discovered, it must be classified, and candidates should understand classification-related technologies such as encryption and DLP (data loss prevention, or data leak protection).
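Here is a small, illustrative content-based discovery and classification sketch. The regular expressions and label names are simplified assumptions for the example rather than the patterns a commercial DLP product would use.

```python
import re

# Content-based discovery: scan text for patterns that suggest sensitive data,
# then assign a classification label that a DLP tool could enforce.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> str:
    hits = {name for name, pattern in PATTERNS.items() if pattern.search(text)}
    if hits & {"credit_card", "ssn"}:
        return "confidential"
    if hits:
        return "internal"
    return "public"

print(classify("Contact jane@example.com about invoice 42"))   # internal
print(classify("Card 4111 1111 1111 1111 expires 12/27"))      # confidential
```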

Designing and Implementing Relevant Jurisdictional Data Protections for Personally Identifiable Information

Domain 2: Asset Security

Google defines an asset as “a useful or valuable thing, person, or quality.” In an organization, assets can be information, equipment or facilities of great worth. The second domain of the CISSP exam focuses on protecting assets. ‘Asset Security’ covers the following sections:
Identify and classify information assets
Different types of information include financial details, password files and credit card information. Some information can be seen by everyone, but some information must be classified to ensure that only those with the appropriate clearance can see it.
Organizations can achieve their core Information security goals of confidentiality and integrity through classification. Before classifying data, security professionals must determine:
Who has access to the data
How data security is achieved
How long the data will remain stored
What method should be used to dispose of the data?
Do the data have to be encrypted?
What is the right use of data?
Data classification differs between the government/military and the commercial sectors. Below is an example of a commercial sector classification:
Private: Personal data such as bank account numbers and social security numbers.
Company restricted: Information that can be viewed only by a small group of employees.
Company confidential: Information that can be viewed by all employees but is not for public use.
Public: Information that is accessible to all.
Below is a list of military data classifications:
Top Secret
Secret
Confidential
Sensitive but unclassified or SBU
Unclassified (Reference: https://resources.infosecinstitute.com/cissp-domain-2-asset-security/)
Protect your privacy
This is the age of social media, and data privacy matters more than ever. Information is all around us, and it is critical to decide whether we want to use, retain or destroy it.

Data privacy has a history that dates back to the 1300s, and it has evolved over time in two major spheres, the US and the EU. The European Union revised its data protection directive in 2012, strengthening its data protection rules. The key points of the new rules are:
Limit personal data collection to the essentials
Strengthen the EU’s single market dimension by removing administrative obstacles
Protect personal data collected by law enforcement
Streamline procedures for data transfers outside of the EU
Following on from that last point, the EU has made clear that data traveling beyond the EU must be protected. The US approach to data privacy differs from the EU’s; both value data privacy highly, but their approaches to it are very different. To bridge the gap, they created the “Safe Harbor” framework, which the US Department of Commerce developed in collaboration with the Federal Data Protection and Information Commissioner of Switzerland.
One effect of the “Safe Harbor” program is that only US-based organizations that certify their compliance can receive personal data from the EU. Other rules and regulations also ensure the privacy of personal data.
Ensure appropriate asset retention
Data retention policies are the guidelines for how data is stored, retained and destroyed. It is recommended that all stakeholders be involved in defining asset retention policies. The following eight steps govern the retention of assets and data:
Understanding the business needs of your organization
Classify data
Determine retention periods
Draft record retention policies
Justify the record retention

Domain 2: Asset Security (Weightage 10%)

The Asset Security (Protecting Security of Assets) domain focuses on data classification, labelling, retention, ownership and clearances. It covers the different storage devices and controls, as well as how those controls are determined, including standards, scoping and tailoring. Every organization must have data protection skills.
This domain covers the day-to-day management of access control, including labels, clearances, formal access approval and the need to know. Government and military data is classified as Unclassified, Sensitive But Unclassified, Confidential, Secret and Top Secret. Data in the private sector is classified as Public, Company Classified, Company Restricted, Private, Confidential and Sensitive.
Next, it discusses information security roles and their responsibilities, including mission or business owners, data owners, system owners, custodians and users. It also discusses data remanence, the data that remains after non-invasive means have been used to delete it.
It then covers memory types such as RAM, ROM and DRAM, as well as firmware and solid-state drives. This section also covers data destruction methods that prevent dumpster diving, such as overwriting, degaussing, destruction and shredding.
Degaussing damages the integrity of magnetic media, such as tapes or disk drives, by exposing them to strong magnetic fields. Destruction physically destroys the integrity of the media itself, such as the platters of a disk drive. Shredding is the act of making data on hard copy unrecoverable. Protecting data, whether in motion or at rest, is vital for any organization.
This domain also covers data security controls such as certification and accreditation, along with standards and control frameworks including PCI-DSS, OCTAVE, ISO 17799, the ISO 27000 series, COBIT and ITIL.
Scoping, which is the process by which an organization determines which parts of a standard are to be used, and Tailoring, which is the process by which an organization customizes a standard for its use, play an important role.
