The world is abuzz with rhetoric about artificial intelligence and machine learning. The terms are often used interchangeably, and the perception that they are one and the same can lead to confusion. So, what are the differences?
First, let’s consider what AI is not. It is not Skynet (yet), and it is not HAL 9000 (yet), although sometimes IBM Watson appears to be getting there.
Will you take the red pill or the blue pill?

In the broader sense of the term, artificial intelligence is the concept of computers dealing with situations related to data and figuring out for themselves the best way to do something, or improving on a method for undertaking a task. Machine learning is the current top of the pile among AI techniques.
So, basically, AI is an all-encompassing term for algorithms that look at data. However, this is too simplistic an idea.
Previously Published on TVP Strategy (The Virtualization Practice)
In 2002, Defense Secretary Donald Rumsfeld gave a speech about a lack of evidence linking the government of Iraq with the supply of weapons of mass destruction to terrorist groups. This speech was remarkable for one thing only, that being the inclusion of the phrase "known knowns, known unknowns, and unknown unknowns." These concepts finally entered common parlance. True, those in the security arena, both physical and logical, already knew and understood the terms, but now laypeople did as well.
Let me explain myself. In the IT security world, people concern themselves with known knowns, known unknowns, and unknown unknowns all the time, and each area has its security tool of choice. Known knowns—the worms, viruses, Trojans, and other malware and vulnerabilities we are already aware of—are dealt with by firewalls, IPSes, IDSes, and antivirus software. The rules of firewall, IDS, and IPS products, coupled with the signatures of antivirus tools, handle these known issues: firewall rules permit only explicitly approved traffic to traverse the network, while antivirus signatures look for particular code patterns and quarantine or block anything that matches. Known unknowns are dealt with by heuristic scanning and education. It is the altogether more difficult unknown unknowns that give IT security professionals sleepless nights.
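The "known knowns" idea can be sketched in a few lines of Python. This is purely illustrative: the port allow-list and the byte-pattern "signatures" below are invented for the example, not taken from any real firewall or antivirus product, and real tools are vastly more sophisticated.

```python
# Illustrative sketch of rule-based and signature-based detection of "known knowns".
# All rules and signatures here are made up for the example.

ALLOWED_PORTS = {80, 443}  # firewall allow-list: only approved traffic traverses


def firewall_permits(dst_port: int) -> bool:
    """A firewall rule admits only traffic that is explicitly allowed."""
    return dst_port in ALLOWED_PORTS


# Antivirus-style signatures: byte patterns of *known* malware (hypothetical names).
SIGNATURES = {
    "example-worm": b"\xde\xad\xbe\xef",
    "example-trojan": b"EVIL_PAYLOAD",
}


def scan(blob: bytes) -> list:
    """Return the names of any known signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in blob]


print(firewall_permits(443))           # HTTPS is on the allow-list
print(firewall_permits(23))            # telnet is not, so it is dropped
print(scan(b"header EVIL_PAYLOAD x"))  # matches the example-trojan signature
print(scan(b"clean data"))             # no known pattern: a scanner sees nothing
```

The last line is the crux of the argument above: a signature scanner, by construction, can only ever flag what it already knows about. Anything novel sails straight through, which is why the unknowns need different tooling entirely.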
Microsoft has just wrapped up its MS Ignite conference in Atlanta. MS Ignite, which morphed from Microsoft's TechEd conference, is the event at which Microsoft traditionally announces and GAs its newest products and delivers its technical strategy announcements. The latest conference has not been a disappointment. This year, as expected for a tech conference, it was all about cloud, cloud, and more cloud, with a smattering of AI thrown in.
First, let’s look at the easier of the two purchases to understand, Cavium’s acquisition of QLogic. Cavium is most likely one of those companies whose product you are using all the time without knowing it. For example, if you own certain Linksys devices or your company runs a Citrix MPX or a Blue Coat packet shaper, you are using Cavium technology.
There can be no real argument: Amazon Web Services reigns supreme in public cloud. Its recently announced quarterly results show that AWS is not only gaining revenue but actually making a "small" surplus. OK, maybe not so small: a tad over half a billion dollars, compared to a $57 million loss for the same quarter in 2015.
What I have found interesting whilst watching it grow is how much like VMware it has become. I can hear you all saying, “It is nothing like VMware.” But please hear me out. AWS’s growth cycle is very similar. Why do I say this?
AWS has become the de facto leader in public cloud in a manner similar to the way VMware dominated the on-premises data center after 2004. Like VMware, AWS has delivered on the early mover advantage. This does not mean that it will continue to dominate, but more on that later.
We are approaching the countdown to the release of Microsoft's latest server operating system, Windows Server 2016. It is estimated that it will be released sometime during Q3 of this year, most likely early September. We've already seen Technical Previews One through Five, each enhancing the previous one and introducing new features.
These features have tempted and beguiled. Storage Spaces Direct—is this Microsoft's VSAN killer? It appears to fill the same function, and coupled with another new feature, Nano Server running the Hyper-V role, it makes a powerful hyperconverged infrastructure play. And what about the newly displayed Windows Containers from Technical Preview Four—are these Microsoft's Docker killer, especially given that they can run Docker containers natively on Windows? These, among other new and enhanced features, make for a compelling release. I cannot remember ever seeing so many truly new features in a single release.
Part 2a of this series concentrated on Hyper-V 2012 R2 and 2016 as well as vSphere 6.0 regarding the addition of a local distributed storage solution: DataCore Virtual SAN in the case of Hyper-V 2012 R2, Storage Spaces Direct with Hyper-V 2016, and VSAN 6.2 with vSphere 6.0. You can review that article here.
This article continues from that second article of the series and finishes the addition of a local distributed storage stack to XenServer and RHEV. Once again, our compute unit of choice is the Dell 730xd with two 10-core CPUs and 256 GB of RAM. As stated in the previous post, we need to add some local storage in each node. These compute nodes can, depending on the choices made during the configuration, take up to twenty-four disk drives. For the purposes of this article, we are assuming that data locality is required for performance and that there is a need for an all-flash array. We chose to go with two 400 GB SLC drives for cache and four 800 GB MLC drives for capacity, giving a total raw capacity per node of 4 TB. There may be further hardware requirements depending on the chosen solutions for each hypervisor, but those will be called out in the relevant vendor sections.
This post will take that original premise and expand it to include storage with a view to moving the entire environment toward a software-defined data center.
Once again, our compute unit of choice is the Dell 730xd with two 10-core CPUs and 256 GB of RAM. Now, we need to add some local storage in each node. This compute node can, depending on the choices made during the configuration, take up to twenty-four disk drives. For the purposes of this article, we assume that data locality is required for performance, and that there is a need for an all-flash array. We have chosen to go with two 400 GB SLC drives for cache and four 800 GB MLC drives for capacity. This means that there is a total raw capacity per node of 4 TB. There may be a requirement for further hardware, depending on the chosen solutions for each hypervisor, but that will be called out in the relevant vendor section. Due to the length of this article, we have split it into two sections. This post deals with the costs surrounding vSphere and Hyper-V.
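The per-node raw-capacity figure quoted above is easy to verify with a quick back-of-the-envelope calculation. A minimal sketch, assuming drive sizes are treated as decimal gigabytes (1 TB = 1,000 GB), as is conventional for marketed drive capacities:

```python
# Back-of-the-envelope check of the raw capacity per node quoted in the text.
# Sizes are decimal GB, as drive vendors market them (1 TB = 1000 GB).

cache_gb = 2 * 400       # two 400 GB SLC drives for the cache tier
capacity_gb = 4 * 800    # four 800 GB MLC drives for the capacity tier

raw_gb = cache_gb + capacity_gb
print(raw_gb, "GB raw =", raw_gb / 1000, "TB per node")  # 4000 GB raw = 4.0 TB per node
```

Note that this is raw capacity only: usable capacity will be lower once the chosen solution's replication or erasure-coding overhead is applied, which is exactly why the per-vendor cost sections matter.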
Over the last couple of weeks, I have been thinking about the costs of building a new virtualization-based data center. "What?" I hear you say. "Everywhere is virtualized—there is no such thing as a greenfield site anymore!" I would have said that myself, but in the last month I have come across three, one of which is a company worth over a billion pounds.
During a conversation I had with that company, they informed me that they were going to use a certain vendor for their hypervisor, because it was cheaper. This got me thinking: how much cheaper is it, really? As a result, this is the first in a series of articles looking at a generic cost breakdown for a general-purpose virtualization infrastructure.
Mark November 3, 2014, in your calendar as a red-letter day and living proof that leopards can change their spots. On this day, Microsoft changed the terms of Windows licensing for its flagship desktop operating system, Windows Enterprise. In an update to the terms and conditions of its Enterprise edition, Microsoft now offers the option to purchase Windows desktop operating systems on a per-user basis as well as a per-device basis, thereby opening up BYOD (bring your own device). Even more amazing, this user-based license negates the hated VDA (Windows Virtual Desktop Access).