Second-half applications for vExpert 2020 are open

I have had the honour of being a vExpert since the program's inception, and I am currently a member of three sub-programs:

  • Cloud Provider
  • NSX
  • vExpert Pro

This post is to let all my readers know that the second-half 2020 applications are now open. As a vExpert Pro, I will be one of those voting on applications.

So what is the vExpert program?

The VMware vExpert program is VMware’s global evangelism and advocacy program. The program was designed to put VMware’s marketing resources towards your advocacy efforts. As an honouree you have access to corporate promotion of your articles via the advocacy.vmware.com site, exposure at VMware's global events, co-op advertising, traffic analysis, and early access to beta programs and VMware’s roadmap. There are many benefits to membership of the program, some of which are outlined below.

What are the benefits of being a vExpert?

  • Networking with 2,000 vExperts / Information Sharing
  • Knowledge Expansion on VMware & Partner Technology
  • Opportunity to apply for vExpert BU Lead Subprograms
  • Possible Job Opportunities
  • Direct Access to VMware Business Units via Subprograms
  • Blog Traffic Boost through Advocacy, @vExpert, @VMware, VMware Launch & Announcement Campaigns
  • 1 Year VMware Licenses for Home Labs for almost all Products & Some Partner Products
  • Private VMware & VMware Partner Sessions
  • Gifts from VMware and VMware Partners
  • vExpert Celebration Parties at both VMworld US and VMworld Europe with VMware CEO, Pat Gelsinger
  • VMware Advocacy Platform Invite (share your content to thousands of vExperts & VMware employees who amplify your content via their social channels)
  • Private Slack Channels for vExpert and the BU Lead Subprograms

For those who are thinking of applying, I will say: do. However, please take a look at what follows. It is not a TL;DR, but it is valuable, as it explains what we are looking for in an application.

I wrote this post to give a few pointers on what a good application should look like, which will hopefully help your application rise to the top of the pile.

Firstly, there are four, though in reality three, tracks through which you can apply for vExpert status (source):

Evangelist Path
This path includes book authors, bloggers, tool builders, public speakers, VMTN contributors, and any IT professionals who share their knowledge and passion with others. One thing to note: if your activities are not in the public domain, i.e. you are an internal evangelist for your company, it is recommended that you obtain a reference from a VMware employee in support of your application.

Customer Path
The Customer Path is for leaders who evangelize VMware. VMware is looking for passionate customers who evangelize within their organization or who have worked with VMware to build success stories, acted as customer references, given public interviews, spoken at conferences, or served as VMUG leaders. This point is important: the activities must not be related to your position within the company; in other words, you should not be being paid to evangelize VMware. Again, a VMware employee reference is recommended if your activities were not all public.

Partner Path
The VMware Partner Network (VPN) Path is for employees of VMware partner companies who lead with passion and by example, who are committed to continuous learning through accreditations and certifications, and to making their technical knowledge and expertise available to many. This can take the shape of event participation, videos, IP generation, and public speaking engagements. This is the most challenging track, as it is the most difficult one in which to draw the line between what is just your day job and what is above and beyond. This is the only track where a VMware employee reference is a requirement.

VMUG & VCDX
If you are a VCDX or an active VMUG leader, you will qualify for the vExpert award.

It must be noted that the chapter you are claiming leadership of must be active, and that you, as the VMUG leader, must be actively involved in organizing any and all events. VMUG status and any activities you claim are verified by the voting team.

Inactive VMUG leaders will not qualify for the award.

All VCDX applicants will be verified via vcdx.vmware.com.

If you are applying via VMUG or VCDX, you should use the Evangelist path and state that you are a VCDX or a VMUG leader, including which chapter you lead.

So how should your application look?

Most people who apply for vExpert use the Evangelist path, so these guidelines are tailored to an Evangelist application, but they are valid for the other tracks as well.

  • Make sure that your content is about VMware. It does not have to be exclusively about VMware, but if you are writing most of the time about Veeam, Microsoft or another vendor, then the chances are high that you will not be selected as a vExpert. At least 50% of your content should be about VMware or have a VMware bias. I, for example, write a lot about HashiCorp’s technologies, but about their use and interaction with VMware technologies.
  • Be honest in your vExpert application. The invigilators also look at the content provided in the application, and they WILL check the VCDX directory if you have claimed VCDX status. If you have chosen the wrong path, because you are, for example, not a VCDX, it is up to the invigilator and the strength of your remaining content whether you get approved.
  • Don’t use a syndicated blog in your application. This is not original content; if you are part of a team of creators, then highlight only your own content. Any content whose copyright is not yours is not yours to claim and will therefore not be given any credit.
  • If you are a member of VMTN, please provide the correct VMTN username. You can find your VMTN username by logging in to VMTN and selecting “View my profile”. This creates a URL similar to this one: https://communities.vmware.com/people/TomHowarth; your username is the last section.

One point to note: only enter your VMTN username if you actually help people on the community. If you are just a lurker and your only post is from 2006, do not bother, as there will be no credit for that.

  • If you are adding speaking engagements to your application (for example VMworld, VMUG, etc.), please provide the necessary links, agendas, etc. No evidence, no credit; in the words of your high-school teachers, for full marks show your workings.
  • Another thing to note is that first-half vExpert applications take into account content from January to December, whereas second-half applications concentrate on a July-to-June timeline, i.e. July 2019 through June 2020. Keep this in mind when you are providing content links.
  • If you are a blogger, please date your posts in whatever way suits you, but provide links. If the invigilators cannot verify that the content is from the required time period, again, no credit.
  • If you are a VMUG leader, please provide the public link to your VMUG profile showing your Leader badge. This saves time for the voters, who then do not need to search for it.
  • This should go without saying: please only mention content or activities created or carried out outside of your daily job.

Hopefully these pointers will help those who apply for the vExpert program; they are as valid for a first or second application as they are for those who were denied on their first try.

Also, if you need help preparing your application, or even general advice about the programme, you can always reach out to a vExpert Pro; they can be found in the vExpert Directory. If there is no vExpert Pro in your country or region, you are welcome to contact me or any of the other Pros in the directory. I think I can say that every one of them will be more than happy to help with your application or give some advice on how to apply.

So what are you waiting for? Apply today.

VMware announces its intent to buy Kubernetes Security startup Octarine

VMware has been shopping again; this time they have put a deposit on a DevSecOps startup called Octarine to fill a monitoring gap in their Kubernetes platform.

VMware has been out shopping in the Silicon Valley mall again. They have given notice of their intent to purchase Octarine, a small venture-backed startup based out of Sunnyvale and Tel Aviv.


So what do we know about Octarine?

Well, to be fair, prior to the announcement the only thing I knew about octarine was that it is part of Terry Pratchett’s Discworld lore, where it is the colour of magic; the founders may well have been fans.


OK, let’s be serious. Octarine is a small start-up with offices in Sunnyvale and Tel Aviv that, during its time as an independent company, raised at least $9m from various venture funds and other investors. They backed its vision of DevSecOps: providing a continuous security and compliance lifecycle to protect Kubernetes deployments from black-hat hackers and other nefarious members of the criminal fraternity.

How does Octarine approach this?

Octarine's Secret Sauce

Anyone who has attempted to deploy Kubernetes, or even just containers in general, knows that traditional monitoring and security products do not provide adequate protection for the types of applications deployed on the underlying container hosts. This is not a limitation of containers or Kubernetes; rather, monitoring and compliance tools were simply not built to deal with the complexities of containers. A new approach was therefore needed. This is not unique to Octarine; other vendors, such as Sysdig, have answered these questions too. The answer is to bake security and compliance in from the initial build all the way through to the final deployed runtime, and then to continually improve on the baselines.

Why is VMware interested in Octarine?

But the real question is: why is VMware interested in this particular startup? It is actually quite a simple question to answer. The purchase of Octarine allows VMware to very neatly fill a glaring gap in its current security product portfolio. Carbon Black and AppDefense protect VMs and native containers, but their Kubernetes coverage is woefully lacking, and this is a massive gap. Actually, it was a glaring six-lane freeway of a gap for potential vulnerabilities!

Freeway sized gaps

This is especially true considering the hype and bluster about Kubernetes at the vSphere 7 and Tanzu product launches, coupled with VMware's statements about containers being first-class citizens on its platforms and Kubernetes being the deployment methodology of choice.

The financial figure for the purchase has not been released, but in the grand scheme of how these things are counted, I cannot believe the price was significant: not small by any means, but not eye-wateringly painful to the pocket either.

As already stated, this acquisition neatly fills a gap in VMware's growing security portfolio. VMware intends to combine it with the Carbon Black endpoint protection it recently acquired for $2.1B and with its more generic AppDefense products, which protect virtual machines and containers. VMware also intends to bring the functionality of the Octarine platform to Tanzu Service Mesh, providing real-time alerts via its network-based IDS to prevent any attempts at breaching microservices.

Octarine’s ability to report on unencrypted connections, internal lateral movement, and many other types of malicious threat will enable Tanzu users to create fine-grained, dynamic policies that automatically protect environments by restricting or isolating compromised microservices, thereby alleviating the risk of a cascading failure across working clusters.
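Octarine's own policy engine is proprietary, and VMware has not published what the Tanzu integration will look like, so the sketch below is purely illustrative rather than anything from the product. It shows the quarantine idea in plain Kubernetes terms, expressed with Terraform's Kubernetes provider to match the Terraform examples later in this post; the namespace and label are hypothetical. Selecting the suspect workload and declaring both policy types with no rules denies it all ingress and egress.

resource "kubernetes_network_policy" "quarantine" {
  metadata {
    name      = "quarantine-compromised-service"
    namespace = "payments"                  # hypothetical namespace
  }

  spec {
    # Select only the pods flagged as compromised
    pod_selector {
      match_labels = {
        app = "suspect-service"             # hypothetical label
      }
    }

    # Both policy types with no ingress/egress rules means deny all
    # traffic to and from the selected pods, isolating them from the
    # rest of the cluster until they can be investigated.
    policy_types = ["Ingress", "Egress"]
  }
}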

Obviously, Octarine's ability to protect both containers and virtual machines is another feather in its cap as far as VMware is concerned. This all ties into the vision that Pat Gelsinger (VMware's CEO) elucidated in March at the release of vSphere 7, when he stated that [VMware] is “out to change the security industry, [as] it’s broken and fragmented with too many vendors. We’re going to make it possible for applications to be born secure, live secure, and die secure”.

Summary


VMware has been very acquisitive in the last year or so. However, unlike in the Maritz period, Gelsinger has been very focused on redefining VMware as a company: moving into new market sectors; taking the Nicira acquisition, which could have been a big mistake as it damaged what was at the time a very good relationship with Cisco, and turning the fledgling Network Services business unit into a $2B-a-year revenue generator from a standing start by FY 2019; redefining the Cloud business unit by off-loading vCloud Air to OVH and selling off other non-core products (Zimbra, anyone?); and then redefining that redefinition when he brought Pivotal back in-house.

But the security division is his baby. VMware has never been seen as poor on security in the way Microsoft once was, but it traditionally relied on third-party products to protect its environments, and any security products it had bought were badly integrated into VMware (remember vShield?). Over the last five years or so, it has quietly been building up quite a decent portfolio of products; it now covers a large proportion of the infrastructure with services that slot in easily and are integrated into a common framework. Octarine is just the latest in a long line of security acquisitions that is helping to cement VMware's position as a vendor that takes security seriously. Pat Gelsinger joined a company that many were writing off as past its best and in its twilight years; however, during his time at the helm, it can be argued that VMware has never been more relevant.

How simple Terraform plans make hybrid and multi-cloud a reality: an introduction

Most non-IT people who have heard the word Terraform will automatically think of terraforming: changing dead planets into Earth-like paradises.

Terraforming Mars will be an escapade in automation, just like terraforming your AWS environment.

But for those of us who work in infrastructure and cloud, Terraform is a tool and configuration language that allows the deployment of infrastructure as code. There is a loose analogy there: with Terraform you build your environment exactly as you wish, using a pre-configured script to reach a predefined end state, just like the science-fiction future of building a new Earth from barren rock.

Anybody who has used AWS will be aware of Terraform. It was written by HashiCorp, and it is one of the primary methods used to automatically build AWS environments in this bold new DevOps world.

What not a lot of people are aware of is that HashiCorp's Terraform can be used to build any infrastructure; all it requires is a provider. Currently, Terraform integrates with all the major public cloud providers (AWS, Azure, GCP, Oracle Cloud and Alibaba Cloud), and it is also available for on-prem environments with VMware and Microsoft Hyper-V.
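To make that concrete, here is a minimal sketch of what a single hybrid plan can look like. It is not a tested configuration: the region, credentials, AMI ID and vCenter address are all placeholders. The point is simply that one Terraform plan can declare providers, and therefore resources, for a public cloud and an on-prem vSphere environment side by side.

terraform {
  required_providers {
    aws     = { source = "hashicorp/aws" }
    vsphere = { source = "hashicorp/vsphere" }
  }
}

# Public cloud half of the hybrid estate
provider "aws" {
  region = "ap-southeast-2"                        # placeholder region
}

# On-prem half of the hybrid estate
provider "vsphere" {
  vsphere_server       = "vcenter.example.local"   # placeholder vCenter
  user                 = var.vsphere_user
  password             = var.vsphere_password
  allow_unverified_ssl = true
}

variable "vsphere_user" {}
variable "vsphere_password" {}

# One resource on the public cloud side; an equivalent
# vsphere_virtual_machine resource could sit in the same plan.
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"          # placeholder AMI
  instance_type = "t3.micro"
}

Running terraform plan against a file like this shows the proposed changes for both environments in a single pass, which is what makes the hybrid and multi-cloud story manageable.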

—– Read More —-

DevOps – The Infrastructure Revolution


According to the great and the good, DevOps is the new reality for operations. I mean, everything is now virtual or encapsulated in lightweight containers. It is all about the app! This article takes a brief look at the rise of the movement that is now called DevOps, investigating where it came from, where it is now, and, more importantly, whether it is suitable for the future.

The DevOps Revolution

Big Dev, Little Ops

Today our view of DevOps is big DEV and little OPS; or, to be more precise, it is shaped by people who have more of a history in development than in day-to-day operations, and the focus is more on using code to deploy infrastructure than on using code to make day-one and day-two operations simpler. The focus has moved rapidly from making system administration simpler through orchestration and automation to the point where the concept of a virtual machine, container or application has been distilled down to several lines of code. Great examples of tools that help us do that are Terraform, PowerShell, Perl, Ansible, Chef and Puppet.
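As a rough illustration of that "several lines of code" point, the Terraform sketch below declares an entire vSphere virtual machine. The datacenter, datastore, network names and sizings are placeholders rather than a tested build, but the shape is representative: the whole machine is one declarative block plus a few lookups.

# Look up the existing infrastructure the VM will land on
data "vsphere_datacenter" "dc" {
  name = "DC01"                            # placeholder datacenter
}

data "vsphere_datastore" "ds" {
  name          = "datastore1"             # placeholder datastore
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_resource_pool" "pool" {
  name          = "Cluster01/Resources"    # placeholder cluster pool
  datacenter_id = data.vsphere_datacenter.dc.id
}

data "vsphere_network" "net" {
  name          = "VM Network"             # placeholder port group
  datacenter_id = data.vsphere_datacenter.dc.id
}

# The virtual machine itself, distilled to a handful of lines
resource "vsphere_virtual_machine" "app" {
  name             = "devops-demo-vm"
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.ds.id
  num_cpus         = 2
  memory           = 4096                  # MB
  guest_id         = "otherGuest64"

  network_interface {
    network_id = data.vsphere_network.net.id
  }

  disk {
    label = "disk0"
    size  = 40                             # GB
  }
}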

———–Read More———–

Previously published on Amazic World.

Are containers first-class citizens in the enterprise?

VMware have recently finished their annual VMworld conference. One of their major announcements was Project Pacific. This is VMware’s biggest vSphere announcement since the introduction of their ESXi product back in 2007.

What are containers to Project Pacific?

Project Pacific is effectively a complete rewrite of vSphere to turn it into a Kubernetes deployment engine. What this means is that VMware have made containers first-class citizens on their platform. Yes, it is true that VMware has supported containers in the past, first with vSphere Integrated Containers (VIC) and later with VMware PKS (Pivotal Container Service), but these have been very much add-ons to the core product, seen as an adjunct to virtual machines rather than as fully paid-up members of the enterprise club.

Project Pacific: VMware's move to become an application platform (copyright VMware)

VMware have struggled with containers, chiefly because, at virtualization's enterprise tipping point, virtual machines were seen as the more valid answer to the issues enterprises then needed solving than the fledgling container technology from Unix vendors such as Sun Microsystems (later bought by Oracle) and its Solaris Zones product.

————-Read More —————-

Well it is time to buckle down and finally attempt VCDX

Time to draw a line in the sand, no more procrastinating – time to attempt the VCDX

I am putting this out there as a poke and a prod to get myself in gear and finally attempt the VCDX-DCV. I am not going to kill myself over this, as I have a life and work commitments, but realistically I will attempt to submit for the December 2019 defence dates, so submission in September.

So: five months to sit both VCAP exams and write my submission.

You may think that I am putting myself under significant pressure, and I am, but I have procrastinated enough over the years.

This is my line in the sand.

The Cloud Act and What it means for you, or more importantly, me!

The CLOUD Act, or to give it full nomenclature, the Clarifying Lawful Overseas Use of Data Act, has been passed into law by POTUS 45. This little act has been touted as an update to the ECPA, or Electronic Communications Privacy Act, and ostensibly, this is the case. What is worrying, though, is the way that it has been signed into law as a part of the Omnibus Spending Bill, without the oversight that a base privacy law should have been given. It feels like it has been smuggled through.

The Cloud Act: it’s MAD (Mutually Assured Data Access)

This is an act that has been praised by technology companies. Below is an excerpt from a joint letter from Apple, Google, Facebook, Microsoft, and Oath (the new name for Yahoo).

The new Clarifying Lawful Overseas Use of Data (CLOUD) Act reflects a growing consensus in favor of protecting Internet users around the world and provides a logical solution for governing cross-border access to data. Introduction of this bipartisan legislation is an important step toward enhancing and protecting individual privacy rights, reducing international conflicts of law and keeping us all safer.

And it has been vilified by privacy and civil rights organizations. This is an excerpt of what the ACLU thinks of the law.

The CLOUD Act represents a major change in the law — and a major threat to our freedoms. Congress should not try to sneak it by the American people by hiding it inside of a giant spending bill. There has not been even one minute devoted to considering amendments to this proposal. Congress should robustly debate this bill and take steps to fix its many flaws, instead of trying to pull a fast one on the American people.

The Electronic Frontier Foundation also had a list of objections:

  • Includes a weak standard for review that does not rise to the protections of the warrant requirement under the 4th Amendment.
  • Fails to require foreign law enforcement to seek individualized and prior judicial review.
  • Grants real-time access and interception to foreign law enforcement without requiring the heightened warrant standards that U.S. police have to adhere to under the Wiretap Act.
  • Fails to place adequate limits on the category and severity of crimes for this type of agreement.
  • Fails to require notice on any level – to the person targeted, to the country where the person resides, and to the country where the data is stored. (Under a separate provision regarding U.S. law enforcement extraterritorial orders, the bill allows companies to give notice to the foreign countries where data is stored, but there is no parallel provision for company-to-country notice when foreign police seek data stored in the United States.)
  • The CLOUD Act also creates an unfair two-tier system. Foreign nations operating under executive agreements are subject to minimization and sharing rules when handling data belonging to U.S. citizens, lawful permanent residents, and corporations. But these privacy rules do not extend to someone born in another country and living in the United States on a temporary visa or without documentation.

It seems that there are two sides to this story, and they are diametrically opposed. Why would the technology companies be on one side of the fence and the civil rights organisations on the other, especially considering Google’s old mantra of “Don’t be evil”? The wording of legal documents often causes this type of result. The intention is to be clear and leave little to no wriggle room for interpretation, but as you can see, the act has been read in completely different ways.

This post was previously published on http://www.tvpstrategy.com

—– Read More —–

Is Traditional IaaS Cloud a Dead Man Walking?

Traditional IaaS cloud—whether AWS’s EC2, Azure’s offering, or even a private IaaS cloud running vCloud Director, vRA, or OpenStack, to name a few—is in trouble. Now, that sounds like quite a contentious statement to make, but I feel the writing is on the wall. “What?” you may ask. “How can you say that? There are many companies that have not even started their cloud journey, and surely IaaS is the first baby step in their travails.” Well, the answer to this is “yes and no.”

Early movers headed out on their journey unprepared, bright-eyed and bushy-tailed, walking into their cloud migrations thinking only of up-front cost savings and believing the patter of the snake-oil salesmen. What is worrying is that, according to an IDG and Datalink survey in 2016, up to 40% of those early adopters have had buyer’s remorse and returned to their cozy data centers or colo sites. Why? Traditional IaaS is expensive. Moving to an infrastructure-only cloud is very expensive, and companies are used to being always on. They are comfortable with instant access to their data from any place, at any time, from effectively anywhere. You really cannot move to a subscription-based cost model on that basis.

Previously Published on TVP Strategy (The Virtualization Practice)

 

—– Read More —–

PERTH IS LOVELY TO VISIT, BUT IT’S NOT CLOUDY: SD-WAN TO THE RESCUE

On February 19, my colleague Edward Haletky wrote a piece on scale. In it, he highlights that scale is not just about 20,000 desktops and 3,000 virtual hosts. Rather, there are many other metrics that could and should be considered with regard to scale.

I am currently living in Perth, Western Australia. Perth holds a rather dubious record in that it is the most remote capital city in the world. “Wait, Canberra is the capital of Australia,” you might say, and you would be correct. However, Australia operates in a federal manner and is made up of states and territories, and Perth is the capital of Western Australia. Why am I saying all this? One word, really: cloud. Living in Perth, our nearest AWS, Azure, and GCP zones are in Sydney, 3,300 kilometers (2,000 miles) away on the east coast. Oracle Cloud? Again, Sydney. OVH? Yes, Sydney. SoftLayer? Wait, it has a zone in Melbourne, but that is still 2,700 kilometers (1,700 miles) from Perth. As you can see, we are quite isolated. Physics rather than doctrine limits Perth’s access to public cloud.

Previously Published on TVP Strategy (The Virtualization Practice)

—– Read More —–

PURE STORAGE DOUBLES DOWN ON VVOLS AND A FEW OTHER THINGS

For a long time, VVols have appeared to be a solution looking for a problem. For the uninitiated, we will first give a brief outline of what VVols are and identify the problem that they purport to solve. On the face of it, it is nothing more than the ability to map one VM to one datastore. However, it is much more than that. VVols are the logical extension of this paradigm in a modern environment. VVols allow policy-based metrics to be applied to individual virtual machines rather than at the datastore level. Why could this not be done with traditional datastores? Quite simply, ESXi is limited to 256 LUNs per host. Now, this might sound like a lot, but consider that, with one datastore (and therefore one LUN) per VM, this would limit you to 256 guests per cluster if you wished to utilize vMotion or Storage vMotion. Not exactly optimal.

Previously Published on TVP Strategy (The Virtualization Practice)

—– Read More —–