At SAP’s annual SAPPHIRE NOW conference last week in Orlando, we had the chance to talk with a lot of people about their SAP big data and private cloud journeys. And while many of us are still humming “It’s My Life” after Jon Bon Jovi’s concert, something else that really stuck with us was the positive feedback from customers about their business transformation experiences with converged infrastructure. We also noticed a big increase in the receptiveness of those not yet using converged infrastructure to learn how they can take advantage of its benefits.

As further validation that converged infrastructure is the best solution available for virtualizing SAP, the March 2014 commissioned study “The Total Economic Impact™ Of Converging SAP Landscapes On Vblock™ Systems,” conducted by Forrester Consulting on behalf of VCE, shows the true economic impact of Vblock™ Systems.

I had the special opportunity to meet with a number of chief systems architects, all discussing (to use SAP’s ASAP terminology) their current Business Blueprint and plans for Realization, including SAP HANA® on the SAP roadmap. A common theme was the amount of time and attention needed to get optimal performance out of their current hardware investments in HP, IBM and NetApp. Many were happily surprised to learn that SAP’s Tailored Datacenter Integration (TDI) program allows SAP HANA customers to leverage existing hardware and infrastructure components for their SAP HANA environment.
We discussed how VCE, through SAP’s TDI program, enables customers to introduce a Vblock System as part of an infrastructure upgrade or consolidation to ensure the line of business has everything it needs to get started with the Realization step in the SAP roadmap, including the correct number of virtual machines and an SAP HANA system within the confines of a single Vblock System.

In the study mentioned above, Forrester Consulting interviewed a number of our customers running virtualized SAP to determine whether, in fact, the Vblock System is the most cost-effective option. The Forrester Total Economic Impact (TEI) study, commissioned by VCE, gathered data about costs, benefits and risk from organizations with large SAP footprints (more than five million SAPS on a single Vblock System).

The findings were impressive and really captured the value of true converged infrastructure. To start, implementation costs were dramatically reduced: VCE was able to implement a full solution in half the time with two-thirds as many internal resources. Once the Vblock System was in place, productivity increased up to 20 percent due to more responsive systems, faster provisioning times and automatic updating.

While these savings are impressive, Forrester found the biggest benefits to be on the operational side. In one instance, a company was able to save $20 million in annual outsourcing contracts. In the first year of operation, the team was able to reallocate 30 percent of its resources to other value-add IT activities. By year two, the benefit increased to 40 percent.
In another instance, a company in the midst of a hiring freeze, which had reduced staff by 5 percent, saw its remaining IT team members keep productivity levels up with the help of the high-performance system.

Following interviews with VCE customers, Forrester used its findings to predict what an organization of 15,000 employees and 5,000 SAP users would experience if it were to move its SAP operations onto a Vblock System. According to Forrester, benefits would include:

- A payback period of less than a year
- Net present value (NPV) of $5.7 million ($1,135 per user)
- 40 percent productivity improvement in IT operations and 20 percent improvement in SAP developer output
- A risk-adjusted ROI of 85 percent, and an IRR of 122 percent
- Benefits worth $14.4 million

These types of savings reaffirm that the simplified way to consume IT through VCE will be less costly for your organization and less worrisome for your IT department.

Check out the full study – The Total Economic Impact™ Of Converging SAP Landscapes On Vblock™ Systems – to learn more about virtualizing SAP on VCE Vblock Systems.
Technology changes; it’s a fact of life, and sometimes making a multi-year commitment can be a difficult decision. The Dell EMC Future-Proof Storage Loyalty Program gives you additional peace of mind with guaranteed satisfaction and investment protection for those future technology changes.

The program covers the Dell EMC storage portfolio, including: VMAX All-Flash, XtremIO X2, SC Series, Dell EMC Unity, Data Domain, Integrated Data Protection Appliance (IDPA), Isilon and the Elastic Cloud Storage (ECS) appliance. Dell EMC Storage and Data Protection offers unbeatable value with a modern, efficient and feature-rich product portfolio at no additional cost to you with the purchase of a support agreement.

Brian Henderson (@BHendu), Storage Portfolio Marketing Director, gives us the details on the 3-Year Satisfaction Guarantee, Hardware Investment Protection and Predictable Support Pricing, along with the 4:1 All-Flash Storage Efficiency Guarantee, Never-Worry Migration, All-Inclusive Software and Built-In Virtustream Storage Cloud. www.dellemc.com/futureproof

Get the Dell EMC The Source app in the Apple App Store or Google Play, and subscribe to the podcast on iTunes, Stitcher Radio or Google Play. Dell EMC The Source Podcast is hosted by Sam Marraccini (@SamMarraccini).
Are you headed to DAC (Design Automation Conference) in San Francisco this week? Dell EMC will be participating in the new Design Infrastructure Alley, where our EDA specialists are looking forward to having one-on-one conversations about Isilon All-Flash solutions for EDA and AI.

The ecosystem required to support the demands of EDA has become more important with the growth of design sizes and complexity. Storage is a critical element of this ecosystem, with a requirement not just for high performance but for massive scalability as well – some design projects can require over 1PB of storage!

Having spent the past 20 years in the EDA industry, I recognize that EDA tool flows depend critically on storage, and that not having the right infrastructure in place can result in slow turnaround times, reduced throughput and even delayed time-to-market – ultimately leading to less revenue. During my tenure at Samsung, I architected and built a scalable, highly productive design environment that involved consolidating four data centers worldwide into a single on-premises cloud design infrastructure, resulting in significant savings in license and IT costs. While there, I also participated in a joint project with Dell EMC and RTDA on the development of a storage-aware grid, with the objective of utilizing storage as an elastic resource in the same way we do cores, licenses and memory. It’s an architecture that allows for maximized job throughput and the lowest license costs. I’m very bullish on Isilon scale-out NAS as the platform that can deliver the scale and performance for today’s design environments and tomorrow’s.

After a long career in the EDA industry, where in addition to Samsung I held senior engineering leadership positions at Inphi, Silicon Image, Synopsys and Siemens, I recently joined Dell EMC as the CTO specializing in the EDA/semiconductor industry for the Unstructured Data Solutions group.
At Dell EMC we are rallying around the value of data capital – the idea that an organization’s data is the source of its wealth and competitive advantage. For EDA, data is the business. My new role is two-fold: spending time with customers to help you achieve your desired strategic outcomes, and feeding what I learn from you back into our business to influence future architecture and design principles. Some of the trends I’ve observed and plan to explore further include the role of object storage and cloud, how to incorporate deep learning to solve design quality and infrastructure problems, and examining the cost of design holistically.

I hope to see you at DAC 2018. You can talk with me and other members of our EDA team in the Dell EMC booth (#1235), and don’t miss our breakout session, “Peeling the Onion: How Enterprise Storage Limits Tool Performance and What You Need to Do to Fix It,” on Wednesday at 10:30 a.m.
A few weeks ago I was interviewed by Roger Magoulas, VP of O’Reilly Media, at the O’Reilly Artificial Intelligence Conference in San Jose. Our conversation focused on moving beyond the artificial intelligence buzz – how organizations can actually design and deploy the optimal IT infrastructure for different AI use cases as they try to move their proof-of-concept AI work into real production environments. With initiatives of this nature, it’s important to consider how AI drives the demand for higher processing power and throughput. I’ve embedded the video of our interview below.

As I discussed in the video, there are two main categories of AI use cases: machine learning (ML) and deep learning (DL). Their processing characteristics are quite different, each requiring specific compute and storage envelopes of performance and scale, as highlighted in Figure 1 below.

Figure 1: Compute and storage requirements of ML and DL use cases

ML pipelines are typically fed semi-structured data generated by machines (servers, mobile phones, IoT sensors, etc.), with datasets ranging in size from tens to hundreds of terabytes up to perhaps a petabyte or two. ML workloads can be adequately serviced by hundreds to a few thousand servers. However, it’s a wholly different scenario with DL.
DL datasets are predominantly unstructured data such as images, video and audio content, typically expanding to multiple petabytes. They require many thousands of compute nodes for processing and may justify GPU investment to cut down on data center cost and footprint.

The location of an organization’s data must also be considered carefully when designing a production deployment architecture for AI platforms. In general, my recommendation is to build your AI platforms in the same location as your data. If the majority of your data is generated in the cloud, then it may make sense to run your AI workflows there too, as you’d likely face substantial data egress charges to move the data on-premises. On the other hand, if your data resides on-premises, then you may want to deploy your AI platforms on-premises as well. This minimizes the cost of managing the data, the latency in accessing it, and the need to run data migration projects as a precursor to data analytics.

What are the other key considerations as you move from PoC to production for your ML and DL workloads? In my work with customers across many industries, I see these common themes:

- Data consolidation – it is cumbersome to do analytics with data scattered across an organization. It is better practice to consolidate the data, ideally in one location but more practically in just a few.
- Decouple compute from storage – an organization’s data may not change significantly over time, but the applications and tools used to analyze it can. Therefore, it makes sense to separate compute and storage. Doing so allows you to point evolving server-based applications and tools at where the data is located, without moving data around as your compute needs change.
- Storage scaling – as data matures, its value may change. This is especially true for historical data.
Capacity scaling should be designed for this dynamic in order to remain cost-effective.
- Data governance – as AI becomes more prevalent, the need for protecting and securing the data also gains importance. Data quality, security, protection, lineage and metadata tracking – considerations taken for granted in the Business Intelligence world – are key in the AI world as well.

Today, data consolidation paradigms have shifted to the concept of a Data Lake. We define a Data Lake as an architectural paradigm that consolidates enterprise data, enables storage to scale independently from compute, and supports analytics and AI applications with varying IO signatures and performance requirements, with data governance capabilities delivered out of the box. With our industry-leading Dell EMC Isilon scale-out NAS platforms, we’ve been driving this idea for some time now as a way to store and manage exploding volumes of unstructured data. To us, however, the Data Lake is not just a marketing buzzword; we put real architectural structure and requirements behind it. One key requirement in building a Data Lake is the ability to support multiple access protocols and applications with differing characteristics – real-time or batch mode – and with varying latency needs. Another is the ability to easily and efficiently access data of differing temperatures, whether “hot” or “cold.” To transparently archive data for AI platforms, we also offer Dell EMC ECS – our flagship distributed object store.

Dell Technologies has helped many customers around the world unlock the value of their data capital for digital transformation. With our broad AI platform portfolio, I’m certain we can assist your organization in its journey to AI.

Want to learn more about the key infrastructure decisions to contemplate as you move forward in your AI journey? View the Moor Insights & Strategy report: Enterprise Machine & Deep Learning with Intelligent Storage.
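The data-locality guidance above comes down to simple arithmetic: per-GB egress charges make moving a large cloud-resident dataset on-premises a significant one-time cost. A minimal back-of-the-envelope sketch, where the per-GB rate is a hypothetical placeholder and not any provider's actual pricing:

```python
def egress_cost_usd(dataset_tb: float, egress_per_gb: float = 0.09) -> float:
    """Rough one-time cost to move a cloud-resident dataset on-premises.

    egress_per_gb is a hypothetical per-GB egress rate; check your
    cloud provider's actual price list before relying on this.
    """
    return dataset_tb * 1024 * egress_per_gb  # TB -> GB, then rate

# A hypothetical 500 TB ML dataset: a single move out of the cloud already
# costs tens of thousands of dollars, which is why running the AI platform
# where the data is generated usually wins.
print(f"One-time egress for 500 TB: ${egress_cost_usd(500):,.0f}")
```

The same arithmetic runs in reverse for on-premises data: there is no egress bill, but a migration project and added access latency take its place, which is the trade-off the article describes.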
1 Based on an April 2020 Principled Technologies report commissioned by Dell EMC, “Dell EMC CloudIQ streamlined the user experience in five cloud-based storage preventive management tasks,” comparing HPE InfoSight with an HPE Primera array vs. CloudIQ with a Dell EMC Unity array. Actual results may vary. Full report: http://facts.pt/m8a5u3v

2 Based on a Dell internal survey of Trusted Advisors (Dell Technologies account team and Partners) conducted March 2020, comparing issue resolution with and without CloudIQ. Actual results may vary.

Customers love Dell EMC CloudIQ because it delivers actionable insights by combining machine learning and human intelligence to provide real-time performance and capacity analysis plus historical tracking, all in a single-pane-of-glass view.

Data grows exponentially each year while budgets and staffing do not grow at the same rate, so tools that can continue the transition to more autonomous infrastructure are becoming increasingly essential. Making the pivot from reactive to proactive management is the next step toward the autonomous data center. It’s only natural that CloudIQ would evolve to not only provide broader support but also streamline management of your data center. CloudIQ supports all major Dell EMC storage platforms, Connectrix switches and VxBlock converged infrastructure, and we are excited to share that it will continue to expand across the Dell Technologies infrastructure portfolio for even broader data center insights. As we bring CloudIQ across the portfolio, you’ll also see Dell Technologies introduce new features and functionality designed to ease management and drive more automation in the data era.

In the past year, the CloudIQ team has been hard at work with our product teams to enhance features and add new functionality that helps users streamline administrative tasks and simplify infrastructure management.
Leveraging machine learning, CloudIQ helps anticipate customers’ problems by turning predictive analytics into actionable insights, with algorithms continuously updated using Dell EMC product and subject-matter expertise. Data is collected on an ongoing basis and combined with industry best practices to address the most potentially impactful issues. This provides IT administrators with the intel they need to take quick action and manage their data center environments more efficiently.

FASTER TIME TO INSIGHT WITH CLOUDIQ

CloudIQ provides streamlined functionality such as performance and capacity anomaly detection, performance impact analysis, and workload contention identification. With a simple, easy interface, detecting and troubleshooting issues is even easier with CloudIQ.

Faster Time to Insight1

With over 30,000 arrays connected to CloudIQ, processing 30 billion data points per day and adoption growing at a rate of over 2,000 systems each month, CloudIQ is continuously getting smarter, better informing users by arming them with actionable insights.

Reduce Risk

CloudIQ makes daily storage administration tasks easier by helping you identify potential issues before they impact your environment. CloudIQ proactive health scores give you an at-a-glance view of issues across your environment, prioritizing them by surfacing the most imminent risks so quick, appropriate action can be taken. Performance anomaly detection and impact analysis use machine learning to zero in on incidents that had an impact on the environment and need remediation. CloudIQ’s VMware integration enables end-to-end analysis of VM activity in the context of the storage systems the VMs run on, without having to access or view a separate portal.

Plan Ahead

CloudIQ helps you stay ahead of business needs with capacity planning tools such as Capacity Full Prediction, which allows you to plan for future budgetary needs.
Capacity Anomaly Detection identifies a sudden surge in capacity utilization that could result in imminent data unavailability, helping you avoid 2 a.m. phone calls.

Improve Productivity

CloudIQ helps you make the most of your resources, as both staff time and equipment can be optimized through a single pane-of-glass view of your environment. You can enable CloudIQ across all major Dell EMC storage platforms, Connectrix switches and VxBlock converged infrastructure. This breadth of support gives users broad oversight of data center health, with plans to extend support across all ISG portfolio products. The CloudIQ mobile app makes it even easier to check on your data center environment anywhere and anytime.

For additional oversight, you can grant your account team Trusted Advisor access to receive timely best-practice recommendations and guidance to optimize your environment and prevent potential issues, often before you even know there is a problem. Trusted Advisors were asked to evaluate time to resolution for common scenarios with and without CloudIQ, and on average they reported being able to resolve issues 3x faster using CloudIQ.2

CloudIQ is available at no additional cost to customers with ProSupport credentials who are connected to our secure remote telemetry. To learn more about CloudIQ, please visit here.
MEXICO CITY (AP) — Mexico is close to granting approval for Russia’s Sputnik V coronavirus vaccine, with lots of spy drama but little public data available. The approval process described by Mexico’s assistant health secretary Tuesday sounded like a cold-war spy thriller, and may not foment confidence in the shot. Hugo López-Gatell said a Mexican technical committee on new medications has recommended approving the vaccine, adding only “some details” were lacking. But he also said that despite weeks of conversations with Russian officials, he could not get his hands on the results of Phase 3 trials, which would indicate how effective the vaccine is.
WASHINGTON (AP) — The pending Supreme Court case on the fate of the Affordable Care Act could give the Biden administration its first opportunity to chart a new course in front of the justices. The health care case is one of several matters, along with immigration and a separate case on Medicaid work requirements, where the new administration could take a different position from the Trump administration at the high court. The Trump administration called on the justices to strike down the entire Obama-era law. Under that law, some 23 million people get health insurance and millions more with preexisting health conditions are protected from discrimination.
NEW YORK (AP) — Two weeks into his post-presidency, Donald Trump has managed the counter-intuitive trick of dominating the news despite being nearly silent publicly. Trump has kept a norm-breaking hold on the public’s attention despite a tradition of former presidents essentially falling off the radar upon their successor’s inauguration, and despite the shutdown of his favored means of communication on Twitter. A measurement of news page views online shows Trump had more than double the attention of President Joe Biden on most days in January — before and after Biden’s inauguration. With an unprecedented second impeachment trial to begin, that’s not likely to end anytime soon.