
A New Approach to Enterprise AI

Octavian Tanase
Chief Product Officer

March 18, 2024


The editorial team of Hitachi Vantara sat down for an interview with our Chief Product Officer, Octavian Tanase, to learn about his vision and strategy for AI in the enterprise.

As we know, AI is not new. Apart from generative AI, what else are you finding that is creating such excitement and possibilities about this new era in AI?

Octavian Tanase: The most valuable car company isn’t Toyota or VW; it’s Tesla. ChatGPT reached 100 million users in two months, a milestone that took Netflix 10 years. And NVIDIA, the leading GPU provider, just surpassed a $2 trillion market cap.

Yesterday’s AI innovators are today’s leaders, setting the stage for tomorrow’s market dominance. The potential value for enterprises across industries is staggering, ranging from $2.6 trillion to $4.4 trillion annually, with early adopters already reaping financial rewards. Common applications today include chatbots, knowledge management, and software development and documentation. But it does not stop there; the future holds use cases across verticals, such as diagnostic imaging, drug discovery, fraud detection, route optimization, predictive maintenance, network optimization, and much more. At Hitachi Vantara Exchange, leaders from across industries discussed some of those projects already well underway today.

Just a few years back, getting started with AI was a daunting task. However, what’s the situation today? What would you tell a business that wants to build an AI practice?

While the underlying technology may seem intricate, constructing AI solutions is no longer an intimidating prospect. Fundamentally, it revolves around three key elements: abundant, high-quality data; advanced data models such as neural networks; and the requisite IT infrastructure to support them, encompassing GPUs, low-latency, high-speed access to unstructured data, and more.

While expertise is certainly crucial, particularly in areas like prompt engineering – an increasingly sought-after skill in the market – the fundamental technological components are relatively straightforward to comprehend at a high level. Moreover, these essential components are not elusive; they are, in fact, readily accessible. Some, like GenAI assistants such as ChatGPT, Google’s Gemini, or Microsoft’s Copilot, are available for free. Others can be accessed with ease through public cloud platforms such as AWS, Azure, and Google Cloud, often requiring only a subscription fee. This accessibility demystifies the process, putting AI solutions within reach of a broader spectrum of users and enterprises.
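To make that point concrete, here is a minimal, illustrative sketch of what subscription-based access to a hosted GenAI service can look like. The provider, client library, model name, and prompt are assumptions made for the sake of the example, not an endorsement of any particular platform.

```python
# Minimal sketch: calling a hosted, subscription-based GenAI service.
# Assumes the OpenAI Python client and an API key in the OPENAI_API_KEY
# environment variable; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a concise enterprise assistant."},
        {"role": "user", "content": "Summarize the key risks of deploying AI for customer support."},
    ],
)
print(response.choices[0].message.content)
```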

That said, the deployment of AI at scale and for mission-critical core applications demands a profound sense of responsibility. Companies are increasingly held legally accountable for missteps, such as hallucinations. A recent incident in the airline industry was a warning shot; the potential consequences escalate when critical customer data or core business processes are compromised.

Enterprise requirements for AI extend beyond functionality to encompass vital criteria such as explainability (XAI), observability, traceability, data security, infrastructure scalability, and cost-effectiveness. Social responsibility and the practice of responsible AI (RAI) are imperative, emphasizing not only the elimination of biases but also judicious consideration of energy consumption. One eye-opening figure illustrates this last point: training a single AI model can emit as much as 300 tons of CO2. This reality underscores the need for ethical, accountable and environmentally conscious AI practices in the pursuit of innovation.

That’s quite a tough spot for business and IT leaders, who are being asked to innovate under all of these constraints. How grave is the situation, and how can they navigate that dilemma?

It’s an open secret that 60% to 80% of AI projects fail, or at least encounter significant setbacks or cost overruns. It’s no surprise that enterprise leaders, on both the business and the IT side, are cautious. The hesitancy to embrace AI stems from a fundamental lack of trust and an amplified perception of risk, spanning failures, escalating costs, and potential damage to reputation. A few pitfalls to avoid: while public clouds provide excellent platforms for experimentation and hands-on learning, not all public clouds are created equal, and they generally pose their own challenges regarding cost and lock-in, real-time performance (think autonomous robotics), and the intricacies of solution assembly. Compliance considerations, such as data residency for proprietary and confidential data, further complicate their adoption.

On the other hand, traditional data centers face their own readiness challenges. Most are ill-equipped to meet the demands of high-performance computing, grappling with issues of latency, throughput, power consumption, and cooling requirements, not to mention the burden of traditional high capital expenses. Striking the right balance between the advantages and pitfalls of public clouds and data centers is a nuanced decision for enterprises navigating the complex landscape of technology implementation.

It’s time for a paradigm shift in Enterprise AI. While the foundational elements of data, models and IT infrastructure remain essential, a reengineered approach is necessary to meet the unique demands of enterprises. A more refined data landscape spanning edge, core and hybrid cloud environments is crucial. Models should go beyond large language models (LLMs), incorporating small language models (SLMs) for specialized, real-time applications like grid optimization, futures trading or carrier routing. The IT infrastructure must be meticulously right-sized, cost-effective and sustainable, addressing critical needs in storage, computing (for both training and inference) and networking.
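As a minimal, illustrative sketch of the SLM idea (not part of any Hitachi offering), the snippet below loads a compact, publicly available language model and runs a short, latency-sensitive inference. The model choice and the maintenance-alert use case are assumptions made purely for illustration.

```python
# Minimal sketch: running a small language model (SLM) for a specialized,
# latency-sensitive task, e.g. summarizing a maintenance alert near the edge.
# The model ID and prompt are illustrative assumptions only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "microsoft/phi-2"  # an example of a publicly available small model
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

def summarize_alert(alert_text: str, max_new_tokens: int = 64) -> str:
    """Generate a short summary and recommended action for an incoming alert."""
    prompt = f"Maintenance alert: {alert_text}\nSummary and recommended action:"
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and return only the newly generated text.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

print(summarize_alert("Pump 7 vibration exceeded threshold for 12 minutes."))
```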

To accelerate the journey to leadership, I expect that enterprises will shift away from constructing proprietary AI platforms from scratch. Instead, their focus will be on developing business applications using well-engineered, trusted and validated AI foundations. I expect that off-the-shelf AI solutions will be able to propel enterprises 60% to 70% of the way, with the remaining 30% to 40% crafted using proprietary IP and data for a competitive advantage. This approach significantly expedites time to market while reducing the risks and costs associated with AI initiatives.

This could be an area where Hitachi Vantara is well-positioned to provide assistance. However, please clarify for our audience, especially those unfamiliar with Hitachi Vantara or who may think of Hitachi as an industrial company, why they should pay attention to our insights.

Hitachi, which has been inspiring the next for over 100 years, has evolved into a digital-first company, rapidly advancing in the realm of AI. In fiscal 2022 alone, we dedicated $2.4B to R&D, and in 2023 we established a $300M corporate fund specifically for digital and AI initiatives. Our leadership extends across key industries, including energy, mobility/transportation, manufacturing and industrials, and financial services. Leveraging expertise in both operational technology (OT) and information technology (IT), we have successfully industrialized AI solutions within our own operations and in collaboration with clients. A notable example is our deployment of AI in control systems for steel production as early as 2021, resulting in tangible benefits such as enhanced quality and yield. Hitachi, Ltd. excels in building robust ecosystems of partners spanning software, hardware, cloud services and industrial products, showcasing our commitment to innovation and collaborative success.

Hitachi Vantara, on the other hand, was founded 35 years ago as Hitachi Data Systems; it continues to be an innovation powerhouse at the center of Hitachi. Known for legendary unbreakable storage and compute infrastructure, Hitachi Vantara was the first provider ever to deliver a 100% data availability guarantee. And we continue to push the boundaries.

Just to cite one example, Hitachi Content Software for File (HCSF) is powering high-resolution video content at Sphere in Las Vegas, a first-of-its-kind deployment handling over 400 gigabytes per second of throughput at under 5 milliseconds of latency. That type of performance is also required for the most demanding industrial AI applications. Other superlatives include energy efficiency: we are the only storage vendor certified by Carbon Footprint for Products. Our recently announced Virtual Storage Platform One delivers a single control and data plane across hybrid clouds. And Hitachi Vantara’s hybrid cloud solutions are available as cost-effective, pay-as-you-go, consumption-based offerings via EverFlex.

With that, let’s talk about your latest announcement, the Hitachi iQ portfolio. I heard it includes a collaboration with NVIDIA. What is that all about?

We’re very excited to introduce Hitachi iQ. It’s an industry-optimized solution suite for AI workloads. Built on our expertise in OT and IT, our engineering prowess and our partner ecosystems, we’re taking a pragmatic, solution-oriented approach to Enterprise AI. Hitachi iQ goes beyond basic integration and testing by layering industry-specific capabilities on top of the AI solution stack, making it more relevant to an organization’s business. Hitachi iQ with NVIDIA’s DGX and HGX will be optimized for industries like manufacturing, transportation, energy, and financial services.

In addition to the DGX BasePOD certification, Hitachi iQ will launch with a high-end HGX offering powered by NVIDIA H100 GPUs, along with a complement of midrange PCIe-based offerings featuring NVIDIA H100 and L40S GPUs. We’ll also provide NVIDIA’s enterprise-grade AI tools and frameworks, NVIDIA AI Enterprise. Furthermore, utilizing HCSF storage technology, Hitachi Vantara will be releasing an accelerated storage node, delivering a fast storage solution for the most complex AI workloads.

It’s not just about technology delivery, though. Last year, Hitachi announced the creation of a Center of Excellence (CoE) for generative AI, supporting customers on their accelerated journeys while helping to control risk, so that customers can fast-track their paths to becoming today’s AI leaders and tomorrow’s market leaders.

And it does not stop there. The future strategy for our iQ portfolio is built on three pillars: universal data access and intelligence across hybrid cloud, with a single semantic plane; packaged industry solutions (with SLMs) and co-pilots built with partners in the industrial sector to automate key functions and processes in delivery and operations; and world-class IT infrastructure solutions from Hitachi Vantara and industry titans such as NVIDIA.

Lastly, we’re here at NVIDIA’s GTC conference today. Please elaborate on our partnership with NVIDIA, and what makes it special.

The recent announcement of our strategic collaboration with NVIDIA marks a significant milestone in accelerating digital transformation through (generative) AI. This partnership signifies a commitment to developing a tailored portfolio of solutions, specifically designed to meet market demands. Our focus is on creating industry-specific AI capabilities that enable swift and actionable insights from data, ultimately expediting digital transformation in both industrial and enterprise sectors.

This collaboration builds upon our existing status as a preferred partner with NVIDIA. But what sets Hitachi Vantara apart and makes us a compelling choice for clients embarking on their AI journeys? Beyond delivering top-tier infrastructure solutions, including those designed for low-latency/high-speed unstructured data access, our strength lies in subject matter expertise. We offer a global reach of services and support, providing a robust framework for successful AI implementation. Additionally, our extensive partner ecosystem further enhances our ability to deliver comprehensive and impactful solutions to our clients.

Octavian, we thank you for the conversation.

Octavian Tanase is Chief Product Officer at Hitachi Vantara.

Learn more about Hitachi Vantara’s solutions for AI.
And be sure to check out Insights for perspectives on the data-driven world.


Octavian Tanase

Octavian is a University of California, Berkeley graduate and resides in the San Francisco Bay Area.