Dell to bring model tuning capabilities to AI platform
The firm aims to provide its enterprise customers with easier options for training AI models on proprietary data


Dell Technologies has unveiled new features for fine-tuning generative AI models within its AI solutions and a new multi-cloud data lakehouse to help businesses harness their data more effectively.
Through Dell Validated Design for Generative AI with Nvidia for Model Customization, the firm will help customers tailor pre-trained models to be more effective for their use cases or to meet sector-specific demands.
“Over one-third of enterprises are already considering building their enterprise-specific LLMs,” said Carol Wilder, VP of ISG cross-portfolio software and solutions at Dell.
“They’re already finding that their pre-trained models are not sufficient for their success, they’re having to customize those models.”
Dell has also committed to providing customers with guidance and examples for deriving value from tuning and prompt engineering through Dell Professional Services for Generative AI.
Through its partnership with Nvidia, Dell will continue to provide customers with models via Nvidia NeMo through Dell Validated Designs for Generative AI, and its preparation, implementation, management, and education services have been updated to include information and assistance on fine-tuning.
Preparing data is key to fine-tuning, and to meet this need Dell and data analytics firm Starburst have announced a new modern data lakehouse solution. The partnership follows a previous announcement at Big Data London 2023 in which the two firms committed to collaborating on data lakehouse technology.
The data lakehouse solution will be open and sit on top of existing data sources across a customer’s hybrid environment, powered by Dell Object and File Storage, Dell PowerEdge, and Starburst's own platform.
“Many of our customers feel like they’re on a treadmill where they need to consolidate all of their data in one place before their data scientist can start to use it,” said Greg Findlen, SVP of ISG data management at Dell.
“With this solution, customers can leverage the data where it exists and as their data scientists are using that data they can also get a better understanding of what data is the most important to consolidate.”
“The number one priority is accelerating how quickly the data science teams and the AI developer teams can get access to that data from across the organization.”
Clear data structure requirements
At Dell Technologies World 2023, Dell global CTO John Roese told ITPro that firms need a clear structure for their data in order to make the best use of large language models (LLMs), and branded the lack of awareness around this issue “disturbing”.
Findlen clarified that the firm seeks to create a fully integrated set of tools for data management so that customers do not have to manually draw together technologies.
This modern data lakehouse solution will be made available in H1 2024. Dell has no plans at present to release the software through a preview or beta.
Dell Validated Design for Generative AI with Nvidia for Model Customization will be made available globally through Dell’s traditional channels, and will also be made accessible through Dell APEX from the end of October.
Benefits of fine-tuning
Inferencing refers to the process by which a pre-built model responds to information a company gives it. For example, a model can infer context from a company’s knowledge base and use that information to respond to user inputs in a way consistent with its training.
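To make the distinction concrete, here is a minimal sketch of inference with a pre-built model, assuming the Hugging Face transformers library is installed; the model name, policy snippet, and query are illustrative stand-ins, and the model’s weights are never changed.

```python
# Minimal inference sketch: context from a knowledge base is injected into the
# prompt at run time; the pre-built model itself is not retrained.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in pre-built model

# Illustrative knowledge-base snippet and user question
kb_snippet = "Support tickets must be acknowledged within four business hours."
user_query = "How quickly do we respond to new support tickets?"

prompt = f"Company policy: {kb_snippet}\nQuestion: {user_query}\nAnswer:"
result = generator(prompt, max_new_tokens=40, do_sample=False)
print(result[0]["generated_text"])
```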
Customization and fine-tuning require developers to understand how a pre-built model works, and to run it through additional rounds of training on new data to improve its performance at a given task.
This can be used to make models more efficient, or more accurate at producing company-aligned outputs. If a company uses a specific in-house programming language, for example, it could train a pre-built code generation model to become more effective at producing code in that language.
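As a rough illustration of what those additional rounds of training look like in practice, here is a minimal fine-tuning sketch assuming the Hugging Face transformers and datasets libraries; the base model, file name, and hyperparameters are illustrative placeholders, not a prescription from Dell or Nvidia.

```python
# Minimal fine-tuning sketch: a pre-built causal language model gets further
# training on proprietary text it has never seen before.
from transformers import (AutoTokenizer, AutoModelForCausalLM, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

base = "gpt2"  # stand-in for any pre-trained model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical in-house examples, e.g. code in a proprietary language
dataset = load_dataset("text", data_files={"train": "inhouse_examples.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()                   # the additional round of training
trainer.save_model("tuned-model")
```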
Meta used fine-tuning to produce a Python-specific version of its code completion LLM Code Llama, named Code Llama - Python. Code Llama was itself created by training Meta's LLM Llama 2 on 500 billion tokens of code and programming data, and the Python variant was then trained on an additional 100 billion tokens of Python data.
In August, OpenAI announced that developers could now fine-tune its LLM GPT-3.5 Turbo, and that fine-tuning for GPT-4 would be made available in the final months of 2023.
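For developers taking that route, starting a fine-tuning job is a short exchange with OpenAI's API. The sketch below assumes the openai Python client (v1 or later) and a prepared examples.jsonl file of chat-formatted training data; the file name is a placeholder.

```python
# Minimal sketch of launching a GPT-3.5 Turbo fine-tuning job via the OpenAI API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the chat-formatted training examples (placeholder file name)
training_file = client.files.create(
    file=open("examples.jsonl", "rb"), purpose="fine-tune")

# Kick off the fine-tuning job against the base model
job = client.fine_tuning.jobs.create(
    training_file=training_file.id, model="gpt-3.5-turbo")
print(job.id, job.status)
```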

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.
In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.