VMware Experts Program Big Data Day 2
This is Day 2 of the VMware Experts Program Big Data, Scientific & Engineering Workloads held at VMware Corporate Headquarters, Palo Alto, CA. There is a blog on Day 1 of this program at:
VMware Experts Program Big Data
NVRAM and Persistent Memory in vSphere
Richard Brunner

The entire presentation was under NDA, so I am not able to share the contents of this session in a public forum.
HPC Performance and Customer Examples
Josh Simons

CPU overcommitment can make sense in certain situations, depending on the job mix.
Hyper-Threading is most useful when an application can stall (for example, waiting on memory), so the second hardware thread can make progress.
In general, the current recommendation is not to overcommit CPU.
The share mechanism only engages when there is contention (a configuration sketch follows these notes).
The platform has a lot of options, and you have lots of choices in how you implement it.
VMware says to leave cores available for ESXi itself.
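To make the share mechanism concrete, here is a minimal pyVmomi sketch (my own illustration, not code from the session) that assigns custom CPU shares to a VM. The vCenter address, credentials, and the VM name hpc-worker-01 are placeholder assumptions; shares only change scheduling when the host CPU is actually contended.

```python
# Minimal sketch, assuming pyVmomi is installed; all connection details
# and the VM name are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="user", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "hpc-worker-01")  # hypothetical VM

    # Give this VM 4000 custom CPU shares: under contention it is scheduled
    # proportionally more often than a sibling VM left at the default shares.
    spec = vim.vm.ConfigSpec()
    spec.cpuAllocation = vim.ResourceAllocationInfo(
        shares=vim.SharesInfo(level="custom", shares=4000))
    vm.ReconfigVM_Task(spec=spec)
finally:
    Disconnect(si)
```

When there is no contention, the VM sees no difference; the shares only decide who wins when demand exceeds the physical cores.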
Great HPC Links
Efficient Big Data Processing and Virtualization
Mellanox
Motti Beck/Liran Liss


Goodput is the effective bandwidth delivered to the application
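To make the goodput definition concrete, here is a back-of-the-envelope Python sketch (my own numbers, not Mellanox's) showing that protocol headers and framing consume wire bandwidth that never reaches the application:

```python
# How much of the line rate is goodput for a full 1500-byte Ethernet MTU
# carrying TCP/IPv4 with no options. Illustrative arithmetic only.
MTU = 1500              # IP packet size in bytes
TCP_IP_HEADERS = 40     # 20-byte IPv4 header + 20-byte TCP header
ETHERNET_OVERHEAD = 38  # 14B header + 4B FCS + 8B preamble + 12B inter-frame gap

def goodput_fraction(mtu: int = MTU) -> float:
    payload = mtu - TCP_IP_HEADERS        # bytes the application actually receives
    wire_bytes = mtu + ETHERNET_OVERHEAD  # bytes occupying the wire per frame
    return payload / wire_bytes

print(f"{goodput_fraction():.1%} of line rate is goodput at best")  # ~94.9%
# Retransmissions after packet loss reduce this further, which is why a
# lossless network matters for distributed big-data workloads.
```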
In an autonomous car environment, you need to analyze the data in real time.
With machine learning, you keep what was done in the past and analyze it later.
With deep learning, you are working more like the brain; it happens in real time.
In the car, it's all happening in real time.
If you are building on a distributed system, the network is important.
You need a network that is fast enough to keep up with the data.
You need no packet loss.
Our approach: don't send it unless you know the other side can receive it (a sketch of this idea follows these notes).
Our approach: offload the CPU as much as possible.
Networks matter in a hyper-converged environment.
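The "don't send unless the other side can receive" idea is credit-based flow control, the mechanism behind InfiniBand-style lossless fabrics. Here is a toy Python model of it (my own sketch, not Mellanox code):

```python
from collections import deque

class CreditLink:
    """Toy credit-based flow control: the sender spends one credit per
    message and stalls at zero, so the receiver's buffer can never
    overflow -- traffic is deferred instead of dropped."""

    def __init__(self, receiver_buffer_slots: int):
        self.credits = receiver_buffer_slots
        self.rx_buffer = deque()

    def send(self, msg) -> bool:
        if self.credits == 0:
            return False              # back-pressure: caller retries later
        self.credits -= 1
        self.rx_buffer.append(msg)    # guaranteed to fit
        return True

    def receive(self):
        msg = self.rx_buffer.popleft()
        self.credits += 1             # freed slot returns a credit to the sender
        return msg

link = CreditLink(receiver_buffer_slots=2)
assert link.send("a") and link.send("b")
assert not link.send("c")             # no credit: sender waits, nothing is lost
link.receive()
assert link.send("c")                 # credit returned, send proceeds
```

Because the sender can never overrun the receiver, loss is designed out rather than recovered from, which is what keeps goodput close to line rate.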
Big Data on vSAN
Sumit Lahiri, VMware


Modernization of the data center is being fueled by HCI (hyper-converged infrastructure).
- 8-node vSAN cluster: 1 Gateway VM, 1 Master VM, 10 Worker VMs
- 16-node vSAN cluster: 1 Gateway VM, 1 Master VM, 26 Worker VMs
- 32-node vSAN cluster: 1 Gateway VM, 1 Master VM, 58 Worker VMs
Hadoop on vSAN Deployment Guide
- Disable DRS and HA; when a host goes down, the VMs on that host should go down with it.
- Let Hadoop take care of failures when VMs go down.
- Leave about 20% of memory for ESXi (see the sketch below).
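A quick sketch of the 20% memory guideline (my arithmetic, with hypothetical host sizes, not figures from the guide):

```python
def worker_vm_memory_gb(host_memory_gb: float, workers_per_host: int,
                        esxi_reserve: float = 0.20) -> float:
    """Reserve ~20% of host memory for ESXi (and vSAN) overhead, then split
    the remainder evenly across the Hadoop worker VMs on the host."""
    return host_memory_gb * (1 - esxi_reserve) / workers_per_host

# Hypothetical 512 GB host running 4 worker VMs -> ~102 GB per VM
print(worker_vm_memory_gb(512, 4))
```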
All-flash vSAN with FTT=1 can fully satisfy Hadoop performance requirements.
With FTT=1, the Hadoop cluster can survive:
- One capacity drive failure
- One disk group failure
- One physical host failure
Upon host failure, the Hadoop cluster can handle losing two worker VMs or one master VM (a rough note on the capacity cost of FTT=1 follows).
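For context on what FTT=1 costs in capacity: with RAID-1 mirroring, vSAN keeps FTT+1 replicas of each object. A rough sketch of my own, ignoring slack space and metadata overhead:

```python
def usable_capacity_tb(raw_tb: float, ftt: int = 1) -> float:
    """RAID-1 mirroring stores FTT + 1 copies of every object, so usable
    capacity is roughly raw capacity divided by (FTT + 1)."""
    return raw_tb / (ftt + 1)

print(usable_capacity_tb(100.0))  # 100 TB raw -> ~50 TB usable at FTT=1
```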
Mike Corey
** All posts here are mine and not my employer's **
LicenseFortress – Real-Time Oracle License Compliance Alerting and Management with a Financial Guarantee.
My Blog: http://michaelcorey.com/
My Personal Twitter Account: Michael_Corey
Columnist for the Big Data Quarterly. Click here to subscribe to Big Data Quarterly.
Buy at VMware Press!