IBM AI Team Releases an Open-Source Family of Granite Code Models for Making Coding Easier for Software Developers


IBM has made a notable advancement in the field of software development by releasing a family of open-source Granite code models designed to make coding easier for developers everywhere. This move stems from the recognition that, although software plays a critical role in contemporary society, the process of writing it is still difficult and time-consuming. Even seasoned engineers frequently struggle to keep learning new things, adapt to new languages, and solve challenging problems.

Large language models (LLMs) have grown in importance within development environments, helping developers work more efficiently and independently on complex programming tasks. IBM’s watsonx Code Assistant (WCA), the company’s most recent innovation in this space, is powered by a 20-billion-parameter Granite code model. The technology’s usefulness in enterprise settings has already been demonstrated by its role in converting COBOL applications into modern services optimized for IBM Z.

IBM has publicly released the Granite code models in four sizes, with parameter counts ranging from 3 billion to 34 billion. The models are designed for a variety of coding workloads, from memory-constrained applications to application modernization. They have undergone a thorough evaluation process to ensure they meet high standards of performance and adaptability across a range of coding tasks, including code generation, debugging, and explanation.

IBM’s decision to release these models under an open-source license demonstrates its commitment to democratizing access to advanced technology. The models are publicly available on platforms such as Hugging Face, GitHub, and RHEL AI. Furthermore, because strict ethical norms were followed throughout data gathering and model training, the models are reliable and trustworthy enough for enterprise adoption.

Through this open-sourcing initiative, IBM hopes to remove the obstacles posed by proprietary models’ high prices and unclear licensing rules and to hasten the adoption of generative AI models in the business sector. Because the Granite code models are adaptable and optimized for corporate workflows, developers gain access to a powerful toolbox that can automate repetitive coding activities, improve code quality, and enable smooth integration between legacy and contemporary applications.

The deliberate release of models in different sizes enables developers to select the ideal trade-off between computational efficiency and performance for their particular requirements. Variants are available with 3 billion, 8 billion, 20 billion, and 34 billion parameters. These models, licensed under Apache 2.0, were trained using depth upscaling on an extensive dataset of 4.5 trillion tokens spanning a broad range of 116 programming languages.
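For developers who want to experiment with one of these variants, the following is a minimal sketch of loading a Granite code model for code completion with the Hugging Face transformers library. The model ID shown is an assumption based on IBM’s ibm-granite organization on Hugging Face; check there for the exact names and available sizes.

```python
# Minimal sketch: load a Granite code model and complete a function.
# The model ID below is an assumption -- verify it against the
# ibm-granite organization on Hugging Face before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3b-code-base"  # hypothetical ID; pick the size you need

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Ask the model to complete a simple Python function.
prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```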

The training data for these models draws on The Stack, a comprehensive public dataset of source code, processed through a pipeline that performs both exact and fuzzy deduplication and filters out low-quality code. To further improve the models’ capabilities, natural language data has been merged with the code. This methodology ensures that the models are adequately prepared to handle a range of coding tasks effectively and efficiently.
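To make the deduplication step concrete, here is a minimal sketch of how exact and fuzzy deduplication might be combined in such a pipeline, using hashlib for exact matching and the datasketch library for MinHash-based fuzzy matching. This is an illustrative assumption, not IBM’s actual pipeline.

```python
# Illustrative sketch of a code-deduplication pass (not IBM's pipeline):
# exact duplicates are dropped via content hashing, near-duplicates via
# MinHash locality-sensitive hashing from the `datasketch` library.
import hashlib
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128  # number of MinHash permutations

def exact_key(code: str) -> str:
    # Identical files produce identical SHA-256 digests.
    return hashlib.sha256(code.encode("utf-8")).hexdigest()

def minhash_of(code: str) -> MinHash:
    # Near-identical files share most of their token set.
    m = MinHash(num_perm=NUM_PERM)
    for token in set(code.split()):
        m.update(token.encode("utf-8"))
    return m

def deduplicate(files: list[str], threshold: float = 0.85) -> list[str]:
    seen = set()
    lsh = MinHashLSH(threshold=threshold, num_perm=NUM_PERM)
    kept = []
    for i, code in enumerate(files):
        key = exact_key(code)
        if key in seen:
            continue  # exact duplicate of an earlier file
        m = minhash_of(code)
        if lsh.query(m):
            continue  # fuzzy duplicate: similarity above threshold
        seen.add(key)
        lsh.insert(str(i), m)
        kept.append(code)
    return kept
```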

In conclusion, IBM envisions a future in which coding is as natural as conversing with an AI assistant, freeing engineers to concentrate more on creative work and less on repetitive duties. The Granite code models are only the start of IBM’s larger plan to enable developers to use AI technologies to reshape the future of computing.


Check out the Blog and Project. All credit for this research goes to the researchers of this project.


Tanya Malhotra is a final-year undergraduate at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with strong analytical and critical-thinking skills, along with an ardent interest in acquiring new skills, leading groups, and managing work in an organized manner.



